The role of the admin server in a cluster

And another thing I don’t get…

In Vista 3 we were told that the admin server didn’t even have to be running all the time. It really only needed to be running during startup and shutdown of the entire cluster. Does anyone else remember it that way?

Now, in Vista 4, there is apparently regular communication between the admin server and the nodes about their existence and health. Does anyone know how the messaging between the admin server and the nodes works with respect to JMS server migration?

Please. Inquiring minds need to know.

3 thoughts on “The role of the admin server in a cluster”

  1. The nodes in a WebLogic cluster communicate their health via multicast messages. Should they get a request intended for another node, they ask that node if it’s there and, failing to find it, ask the admin node who has the replica (I’m not positive about the admin node’s part in that).
    The admin node tracks which nodes are available in config.xml (even in Vista 3). Should it not be able to communicate with a node, it will consider that node down.
    We have had occasions where the admin node disconnected from all the other nodes in the cluster. The managed nodes acted in a zombie-like fashion, replication was likely failing, and all JMS services (mail, chat, and LC creation) failed.
    The JMS service would not stick to any target even once the admin node was restored to service. Thankfully, we had established a subpool that no students can access as the set of possible targets for the JMS services, so we were able to shut down all of those nodes and force the services back to the proper location. This was very frustrating.
    I hope this helps.

  2. If you’re on App Pack 2, there’s an important Support Bulletin on Behind the Blackboard titled:
    “JMS Node Issue After Upgrade or Fresh Install of Vista 4.2”
    We had a situation where our first node came down and the JMS server did not migrate properly, despite the deceptive “Completed” message. This little work-around helped us out.

  3. I considered the multicast ‘heartbeat’ to be something of a formality in a load-balanced cluster. I can see that if Node A, the default target JMS server, were to miss a couple of ‘heartbeats’, the Admin might order a JMS migration (is that done over multicast as well?), but it shouldn’t order a failover just because one of the other Nodes doesn’t report in…
    Okay, so what happens if the Admin box is down? Shouldn’t JMS continue blithely along, unaware that the Admin box is gone?
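The subpool-of-allowed-targets arrangement described in the first comment corresponds to the migratable-target entries the admin server records in config.xml. Here is a rough, illustrative fragment in WebLogic 8.1-era syntax; every name, address, and port below is invented, so check your own config.xml rather than copying this:

```xml
<Domain Name="vistadomain">
  <!-- Cluster members find each other and exchange heartbeats over multicast -->
  <Cluster Name="vistaCluster" MulticastAddress="237.0.0.1" MulticastPort="7777"/>
  <Server Name="nodeA" Cluster="vistaCluster" ListenPort="7003"/>
  <Server Name="nodeB" Cluster="vistaCluster" ListenPort="7003"/>
  <!-- The migratable target restricts where the JMS server may land:
       nodeA is preferred, and only nodeA/nodeB are candidates -->
  <MigratableTarget Name="nodeA (migratable)" Cluster="vistaCluster"
      UserPreferredServer="nodeA"
      ConstrainedCandidateServers="nodeA,nodeB"/>
  <!-- Targeting the JMS server at the migratable target (not a plain server)
       is what makes migration possible at all -->
  <JMSServer Name="vistaJMS" Targets="nodeA (migratable)"/>
</MigratableTarget>
</Domain>
```

Restricting `ConstrainedCandidateServers` to a pool students never hit is exactly the trick the first comment describes: when migration misbehaves, you can bounce those nodes without taking down student-facing capacity.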
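The miss-a-few-heartbeats-then-migrate behavior the third comment speculates about can be sketched as a toy model. To be clear, this is not WebLogic’s actual implementation; the intervals, thresholds, and class/method names are all assumptions made up for discussion:

```python
import time

# Assumed values for illustration only; WebLogic's real timings differ.
HEARTBEAT_INTERVAL = 10      # seconds between multicast heartbeats
MISSED_BEATS_THRESHOLD = 3   # beats missed before a node is presumed down


class AdminMonitor:
    """Toy admin-server view: track the last heartbeat seen from each node
    and move the JMS server when its host goes silent."""

    def __init__(self, nodes, clock=time.time):
        self.clock = clock
        self.last_seen = {node: clock() for node in nodes}
        self.jms_host = nodes[0]        # assume the first node hosts JMS
        self.candidates = list(nodes)   # allowed migration targets, in order

    def heartbeat(self, node):
        """Record a multicast heartbeat from a node."""
        self.last_seen[node] = self.clock()

    def is_down(self, node):
        """A node is presumed down after enough consecutive silence."""
        silence = self.clock() - self.last_seen[node]
        return silence > HEARTBEAT_INTERVAL * MISSED_BEATS_THRESHOLD

    def check_and_migrate(self):
        """If the JMS host has gone silent, move JMS to a live candidate."""
        if not self.is_down(self.jms_host):
            return self.jms_host
        for node in self.candidates:
            if node != self.jms_host and not self.is_down(node):
                self.jms_host = node    # "migration" in this toy model
                break
        return self.jms_host
```

Note what the model does *not* do: missed heartbeats from a non-JMS node change nothing, and if the monitor itself (the Admin box) is gone, nobody is calling `check_and_migrate()` at all, so JMS keeps running wherever it last was until something else intervenes.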
