A few questions after using
1. Session Stickiness - the options are None, Table, and HTTP Cookie.
None I can figure out. :-)
Table - what exactly does this do? Does it save a table of the client IP or something?
HTTP Cookie - does this add a cookie on its own? e.g., add an additional cookie that looks like it came from my site?
2. I took the last node out of the load balancer (changed mode to "reject"), and waited for the site to come down. After it did, I changed this node's mode back to "accept", and went back to the configuration page for the port. I saw the status go from MAINT to UP 1/2 to UP, over about 30 seconds or so.
Can you clarify what MAINT and UP 1/2 mean?
3. When I set health check type to HTTP Valid Status, it lets me choose a "check HTTP path". Makes sense. However, I was thinking it would be useful to specify a host header to be sent as well; my Apache configuration at the moment hosts multiple sites, and requests coming in without a host header are handled differently from other requests. Not a big deal - one could work around this by putting some code in the no-hostname site, or switching between sites using port numbers instead, but it was something I ran into so I thought I'd bring it up.
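For reference, the behavior being asked for is roughly the following sketch: an HTTP health check that sends an explicit Host header, so a name-based Apache vhost answers instead of the default no-hostname site. The hostname, path, and function name here are placeholders, not anything the balancer actually exposes:

```python
import http.client

def check(ip, port, path="/health", host="www.example.com", timeout=5):
    """Hypothetical health check: GET `path` from ip:port with an
    explicit Host header, and treat any 2xx/3xx status as healthy."""
    conn = http.client.HTTPConnection(ip, port, timeout=timeout)
    try:
        # Passing Host in the headers dict makes http.client send it
        # instead of the default "ip:port" Host header.
        conn.request("GET", path, headers={"Host": host})
        status = conn.getresponse().status
        return 200 <= status < 400
    except OSError:
        # Connection refused / timed out / reset => node is down.
        return False
    finally:
        conn.close()
```

With a check like this, each vhost could expose its own health path rather than relying on whichever site handles hostname-less requests.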
Nice job on this so far guys!
2 Replies
@gregr:
1. Session Stickiness - the options are None, Table, and HTTP Cookie.
None I can figure out. :-)
Table - what exactly does this do? Does it save a table of the client IP or something?
HTTP Cookie - does this add a cookie on its own? e.g., add an additional cookie that looks like it came from my site?
Oh, I know this one! (But only because I asked the same question in chat)
Yes, table is per-client-IP. Note that the table is not distributed across the balancer cluster, so state will be lost if the cluster fails over to a backup host.
HTTP cookie is a balancer based cookie (currently NB_SRVID) that is managed by the balancer, and should survive cluster failover.
– David
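David's answer can be illustrated with a toy sketch contrasting the two modes. This is not the balancer's actual code - the backend names and round-robin choice are made up - but NB_SRVID is the cookie name mentioned above:

```python
class Balancer:
    """Toy illustration of Table vs. HTTP Cookie stickiness."""

    def __init__(self, backends):
        self.backends = backends
        self.table = {}   # Table mode: client IP -> backend
        self.rr = 0       # round-robin cursor

    def _next(self):
        backend = self.backends[self.rr % len(self.backends)]
        self.rr += 1
        return backend

    def route_by_table(self, client_ip):
        # Table mode: remember which backend served this source IP.
        # The table lives only on the active balancer host, so this
        # state is lost if the cluster fails over to a backup.
        if client_ip not in self.table:
            self.table[client_ip] = self._next()
        return self.table[client_ip]

    def route_by_cookie(self, cookies):
        # Cookie mode: the balancer injects its own cookie (NB_SRVID)
        # naming the backend, and the client echoes it back on later
        # requests. No balancer-side state is needed, which is why
        # stickiness can survive a cluster failover.
        srv = cookies.get("NB_SRVID")
        if srv in self.backends:
            return srv
        chosen = self._next()
        cookies["NB_SRVID"] = chosen   # i.e., Set-Cookie on the response
        return chosen
```

The key difference the sketch shows: Table keeps the mapping on the balancer, while Cookie pushes it to the client.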
@gregr:
2. I took the last node out of the load balancer (changed mode to "reject"), and waited for the site to come down. After it did, I changed this node's mode back to "accept", and went back to the configuration page for the port. I saw the status go from MAINT to UP 1/2 to UP, over about 30 seconds or so.
If this means that the node balancer does a sort of slow start, then I'll be just deliriously happy. If so, what causes the state change from "UP 1/2" to "UP"? Is it a simple timer or some sort of more advanced heuristic?
I have at least one service that's heavily dependent on caching, and it takes a little priming before it can drink from the firehose.
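If the balancer behaves like common rise/fall-style health checkers, "UP 1/2" would mean "1 of 2 required consecutive successful checks seen so far" - an assumption about what the display means, not confirmed behavior. A minimal sketch of that state machine, with made-up thresholds:

```python
class HealthChecker:
    """Sketch of a rise/fall checker: a node must pass `rise`
    consecutive checks to be marked UP, and fail `fall` consecutive
    checks to be marked DOWN (thresholds here are assumptions)."""

    def __init__(self, rise=2, fall=3):
        self.rise = rise
        self.fall = fall
        self.successes = 0
        self.failures = 0
        self.up = False

    def record(self, ok):
        if ok:
            self.failures = 0
            self.successes += 1
            if self.successes >= self.rise:
                self.up = True
        else:
            self.successes = 0
            self.failures += 1
            if self.failures >= self.fall:
                self.up = False
        return self.status()

    def status(self):
        if self.up:
            return "UP"
        if self.successes:
            return f"UP {self.successes}/{self.rise}"
        return "DOWN"
```

Under this model the ~30-second MAINT-to-UP transition would just be the check interval times the rise count, not a traffic-ramping slow start - so it wouldn't by itself solve the cache-priming problem.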