Prior to OpenAM 10.1.0, Session Failover (SFO) relied on Sun Java Message Queue, a fairly complex product that can make debugging challenging at times (well, depending on how well trained you are). I have numerous posts on this blog about Sun Java Message Queue and how SFO works.
The OpenAM roadmap since version 10.1.0 is to completely remove this cumbersome layer and replace it with OpenDJ. The Core Token Service (CTS) has been rewritten to use OpenDJ to store users' sessions and tokens.
Session failover also relies on a shared Core Token Service (CTS) to store user session data. The service is shared with other OpenAM servers in the same OpenAM site. When an OpenAM server goes down, other servers in the site can read user session information from the CTS, so the user with a valid session does not have to log in again. When the original OpenAM server becomes available again, it can also read session information from the CTS, and can carry on serving users with active sessions.
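To see what the CTS actually stores, you can peek into the token container with ldapsearch. The sketch below is only illustrative: it assumes the embedded OpenDJ instance on its default LDAP port 50389, the default configuration suffix dc=openam,dc=forgerock,dc=org, and the frCoreToken schema names that I believe OpenAM uses for CTS entries; adjust all of these to your own deployment.

    # Sketch: list active session tokens in the CTS store.
    # Assumed values: embedded OpenDJ on localhost:50389, default suffix,
    # and the CTS objectClass/attribute names -- verify against your schema.
    ldapsearch -h localhost -p 50389 \
      -D "cn=Directory Manager" -w password \
      -b "ou=famrecords,ou=openam-session,ou=tokens,dc=openam,dc=forgerock,dc=org" \
      "(objectClass=frCoreToken)" coreTokenId coreTokenUserId coreTokenExpirationDate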
From an operational perspective, this new feature is very much welcome. Customers no longer have to learn an additional technology like Sun Java Message Queue. Instead, they can focus on OpenAM and OpenDJ, and I must say both are relatively easy to pick up. Forget about Sun Java Message Queue!
Now, there is one thing to take note of for multi-instance deployments. One has to read the documentation carefully - CTS Deployment Scenario. I quote from it below:
To reduce the impact of any given failure, consider the following options:
- Start your implementation, if possible, with the CTS options available with the OpenDJ instance embedded in OpenAM. You can still set up a different backend on the embedded OpenDJ server. If the embedded OpenDJ server can handle your requirements, it will simplify implementation of CTS.
- Isolate the user, configuration, and session stores from OpenAM in separate external OpenDJ servers.
- Configure multiple directory stores for CTS, set up with load balancer(s).
- Add separate servers for data store replication. For more information on how this is done with OpenDJ, see the OpenDJ documentation on Stand-alone Replication Servers.
- Set up redundancy in the load balancer connections between OpenAM and the external data store.
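For illustration, here is roughly what pointing the CTS at an external OpenDJ server looks like from the command line instead of the console. This is a sketch only: the property names are my reading of the OpenAM 11 server defaults, and the host name, bind account, and suffix are hypothetical.

    # Sketch: configure an external CTS store with ssoadm.
    # cts1.example.com:1389, the bind account, and the suffix are made up;
    # the org.forgerock.services.cts.* property names are assumptions.
    ssoadm update-server-cfg -s default -u amadmin -f /tmp/pwd.txt \
      -a org.forgerock.services.cts.store.directory.name=cts1.example.com:1389 \
      -a "org.forgerock.services.cts.store.loginid=cn=openam_cts,ou=admins,dc=example,dc=com" \
      -a org.forgerock.services.cts.store.password=secret12 \
      -a org.forgerock.services.cts.store.root.suffix=dc=cts,dc=example,dc=com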
Notice, however, that there is only a single field to key in the Directory Name. The code cannot handle multiple OpenDJ servers in the backend. This is odd, because the existing Authentication module and Data Store module are both able to load-balance and fail over across multiple OpenDJ backends.
Perhaps there is a technical reason for intentionally not supporting multiple OpenDJ servers at the CTS layer; I do not have the answer. I just find it a step backward in an otherwise welcome feature. The practical workaround, as the documentation hints, is to put a load balancer in front of the CTS directory servers and point that single Directory Name at the load balancer, as sketched below.
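A minimal sketch with HAProxy in TCP mode, assuming two hypothetical OpenDJ servers dj1.example.com and dj2.example.com on port 1389:

    # Sketch: minimal HAProxy config to load-balance LDAP traffic for CTS.
    # dj1/dj2 are hypothetical OpenDJ hosts; point the CTS Directory Name
    # at this load balancer's host:1389.
    cat > /etc/haproxy/haproxy.cfg <<'EOF'
    defaults
        mode tcp
        timeout connect 5s
        timeout client  1m
        timeout server  1m

    listen cts-ldap
        bind *:1389
        balance roundrobin
        option tcp-check
        server dj1 dj1.example.com:1389 check
        server dj2 dj2.example.com:1389 check
    EOF

Bear in mind this only moves the failure point into the load balancer itself, which is why the documentation also recommends setting up redundancy in the load balancer connections.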
Have you tested the session failover with an external config/session store (v11)? If you leave the CTS settings at their defaults (don't touch them), session failover still works.
It seems to take the config store settings automatically (you need to set two LDAP servers under the directory configuration for each server), so I don't understand what all this load-balancer talk in the docs is about.
I do not quite understand what your deployment architecture is like, so I can't comment. I did, however, find out that the External Store Configuration is not available in 11.0.0; it is only available in later versions. I think the screenshot above was captured from 11.0.1.