Covering J2EE Security and WebLogic Topics

WebLogic 10 Released

BEA announced that WebLogic 10 has been released for general availability. I don’t know about you, but I haven’t even fully kicked the tires on 9.x yet!

With this release, the big push was for Java EE 5, EJB 3.0, and Spring interoperability. Security-wise, the changes seem to be incremental. Here are the highlights of the security changes:

  • Cross-domain security has been improved. Instead of having two or more domains with the same credentials (crazy!), the credential mapper is used. Sounds like a good improvement…
  • The console can now record your interaction with it as WLST scripts. That’s nifty. I haven’t tried it (nor have I tried WebLogic 10 at all yet) but it has the potential to supersede my MBean-finding techniques described in Find WebLogic MBeans with Ease and Using Audit Logs to Make Scripting Easier.
  • The WebLogic Diagnostic Framework (WLDF) can now poke around in an HTTP session. That sounds like fun! 😉
  • weblogic.jar has been “refactored.” Read the release notes for more information, especially if you use custom Java security policies.
  • Support for additional and updated WS-* specifications includes WS-SecureConversation 1.3, WS-Security 1.1, WS-SecurityPolicy 1.2, and WS-Trust 1.3.
  • The Windows NT Authentication provider was deprecated.

That’s all of the documented changes in the security arena. I plan on going a little more in-depth on some of these in the near future.

Implicit Groups in WebLogic

WebLogic has some special groups which you would only learn about if you read the documentation. I know, that’s a good one! But seriously, there is a special group for authenticated users and one for all users which I’ll get to in a moment.

Default Groups

Per the documentation, BEA supplies several default groups. The four you probably know about are:

  • Administrators
  • Deployers
  • Operators
  • Monitors

You know about these default groups because they appear automatically in the list of groups within the security realm. Each of these groups is associated with an authentication provider and you can delete them if you wish (assuming you’re aware of the consequences). Furthermore, the documentation states that "users" and "everyone" are also default groups. However, I prefer to call these "Implicit Groups."

Implicit Groups

The implicit groups "users" and "everyone" are not associated with any security provider. Rather, you can think of them as virtual groups spanning all authentication providers. Membership in these implicit groups is dynamically handled by the server.

So, what are these implicit groups?

The "users" Implicit Group

Simply stated, any authenticated user is a member of this group. If an authenticated user otherwise has no group memberships (such as Administrators, StockTrader, etc.), he’ll still be a member of this group.

The "everyone" Implicit Group

All users are members of the "everyone" group whether they are authenticated or not. As such, an authenticated user will be in both the "everyone" and "users" groups. The "everyone" group seems a little silly to me because I can’t think of a good use for it but maybe I’m missing something.

In fact, I know I’m missing something because there is a default global role called "Anonymous." This global role maps to the "everyone" group. However, since the "everyone" group contains anonymous (i.e., unauthenticated) and authenticated users, an authenticated user would have the Anonymous role. Isn’t that like matter and anti-matter colliding?

Using Implicit Groups

What can you do with these puppies?

It’s important to realize that implicit groups are legitimate albeit hidden groups so you can use them for security constraints like any other group. In other words, you can map a role used by a security constraint to the "users" group. You can also query the mapped role with HttpServletRequest.isUserInRole() to see if the user has the role that maps to an implicit group. (NOTE: Don’t let WebLogic 8.1’s default mapping of roles to group names bite you when you move to WebLogic 9.x. See WebLogic 9.1 Authorization Gotcha for more information.)

For example, you might have a scenario where you want your initial web page to be accessible to any user who can authenticate. The user can then determine if they need access to the application and can click a link to request access. Other links deeper into the application would probably have security constraints with application-specific roles like StockTrader which our unprivileged yet authenticated user would not be able to access or even see.
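For example, the role guarding that initial page can be mapped directly to the implicit group in weblogic.xml. A minimal sketch (the role name AuthenticatedUser is a made-up example; "users" is the implicit group):

```xml
<!-- weblogic.xml: map an application role to the implicit "users" group.
     AuthenticatedUser is a hypothetical role name for this example. -->
<security-role-assignment>
  <role-name>AuthenticatedUser</role-name>
  <principal-name>users</principal-name>
</security-role-assignment>
```

A security-constraint in web.xml would then name AuthenticatedUser in its auth-constraint, and any user who can authenticate satisfies it.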

Auditing and Implicit Groups

With the Default Auditor, authorization events include the groups to which the user belongs. But the implicit groups are so implicit that they aren’t listed! It actually makes sense after a moment’s thought — Authorization happens after authentication and an authenticated user is ALWAYS in the "users" and "everyone" groups by definition. As for anonymous users, they haven’t authenticated so there are no audit entries, anyway.

Parting Questions

Can you think of any other uses for implicit groups? Can you enlighten me on the usefulness of the "everyone" group? I look forward to hearing your ideas.

How to Protect Against CSRF Attacks

In Unconventional Warfare, I took a somewhat whimsical approach to describing the challenges of application security today. While the analogy was fun, the message was quite serious. The take-away was that we as developers need to know more about the latest techniques used to subvert our applications.

With this post, I’m going to show you a simple but eye-opening CSRF exploit against WebLogic console. We’ll add a user to your WebLogic security realm without any outward indication that it happened.

Before we go on, it’s important to know what CSRF is. CSRF stands for Cross-Site Request Forgery. Essentially, a malicious website takes advantage of another website’s trust in the user.

I’m just learning about vulnerabilities like CSRF, but it occurred to me that administrators would be likely targets for such attacks. Could I compromise my own WebLogic server? It turns out I could, and it took about five minutes to figure out how to add a user with CSRF. The significance here is that someone with only a cursory knowledge of CSRF can cause considerable damage.

As I researched CSRF for this post, I learned that administrators have always been targeted. Also, there are many different ways to pull off the attack and it might be a multi-step process with Cross-Site Scripting (XSS) thrown in for good measure. It’s my intention that my simple demonstration of CSRF will stoke your interest in the subject enough to start defending against these attacks. I’ll supply some techniques for avoiding CSRF vulnerabilities in your applications as well as how to protect your WebLogic console.

OK, time for the demo. You’ll need the following things if you want to try this for yourself:

  • A WebLogic 8.1 domain running on your local machine (I tested against 8.1.4)
  • Your domain has to have the default realm name (myrealm)
  • Your domain has to have the default authenticator named DefaultAuthenticator
  • Your server needs to run on port 80 or 7001

That’s it. You can just take the defaults when you configure the domain and everything will be as required.

With the pre-requisites in place, let’s add the user. Perform these steps:

  1. Fire up WebLogic
  2. Sign into the console
  3. In the same browser session as your console, navigate to my CSRF demo page

See the helpful web page? Now, go back to the console and examine your users. Did you add the user named SpongebobWasHere? I didn’t think so…

The user SpongebobWasHere was added by a CSRF exploit

Perhaps you are wary of going to my demo attack page. I don’t blame you. Security researcher sites don’t display “Best viewed with telnet to port 80” simply for the humor of it. Assuming you don’t want to telnet, you have two other choices for seeing the user get added:

  1. Save the link to your machine, examine the contents, and then load that downloaded page in your browser when you’re satisfied that it’s safe
  2. Go to the URL below by copying and pasting it into your browser (I had to add spaces to get the long text to wrap so you’ll have to remove them)

http://localhost:7001/console/actions/security/DoCreateUserAction? cancelAction=%2Factions%2Fsecurity%2FListUsersAction%3F scopeMBean%3DSecurity%253AName%253Dmyrealm&realm= Security%3AName%3Dmyrealm&continueAction=%2Factions%2F security%2FDoEditUserAction%3FcancelAction%3D%252Factions %252Fsecurity%252FListUsersAction%253FscopeMBean%253DSecurity %25253AName%25253Dmyrealm%26realm%3DSecurity%253AName %253Dmyrealm%26provider%3DSecurity%253AName%253D myrealmDefaultAuthenticator&provider=Security%3AName%3D myrealmDefaultAuthenticator& wl_control_weblogic_management_security_User_Name= SpongebobWasHere&wl_control_weblogic_management_security_User_Password =password&dependentPassword_wl_control_weblogic_management_ security_User_Password=password

Change the port as required.

By the way, the URL above is the only “active” ingredient in the demo page. It serves as the source of an IMG tag. Essentially, the IMG tag issues a GET against the server running on localhost at the specified port. All of the parameters in the URL were gleaned from the HTML source of the console Add User page. The interesting stuff is at the end where I specified the username and passwords.
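In other words, the demo page needs little more than this sketch (the src is the full URL shown above, truncated here for brevity):

```html
<html>
  <body>
    <p>Just a helpful web page. Nothing to see here...</p>
    <!-- The victim's browser dutifully issues a GET to localhost
         with the victim's own console cookies attached. -->
    <img src="http://localhost:7001/console/actions/security/DoCreateUserAction?..."
         width="1" height="1" alt="">
  </body>
</html>
```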

The reason this exploit works is that the request to your WebLogic server came from YOUR browser. With YOUR cookies. In fact, if auditing is turned on, it looks like YOU did it.

So, there it is — a quick and easily understandable CSRF attack. I’m amazed at the ease with which this was done. Granted, I made several assumptions when crafting the URL regarding the names of things. Changing any one name would have defeated this particular attack, but a determined attacker might have employed other techniques to learn the names.

Protecting WebLogic Console from CSRF Exploits

Since WebLogic 8.1 console is vulnerable to CSRF, one solution is to change the name of the console. Another is to undeploy the console and use the scripting tools, instead. Hiding behind a firewall is not an option. Non-routable IPs are exploitable as you can see from the use of “localhost” in the URL above.

Yet another solution is to log out of the console before browsing anywhere else. This is probably a good habit for anything you need to log in to such as your bank’s web site.

What’s the likelihood that someone would do this particular exploit? Not very likely, but I believe it is possible even if your server is not on localhost. For example, an attacker might check your browser history and notice you’ve been to your console’s URL. Hmmm…

Now, I don’t mean to pick on WebLogic console. From what I’ve read, MANY applications are vulnerable to CSRF. BEA might have fixed the problem in 9.x because I wasn’t able to duplicate my success there. So, either they fixed it or I didn’t craft the URL correctly.

Protecting Your Applications from CSRF Exploits

How can you prevent CSRF attacks in your applications? I was hoping you would ask!

First, know that checking the referrer or doing POST instead of GET won’t save you. To have a shot at preventing a CSRF attack, consider the following techniques:

  • Set a short session timeout
  • Use a token for forms
  • Re-authenticate the user or use a CAPTCHA for each important action
  • Have no XSS vulnerabilities
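The token-for-forms technique deserves a closer look. Here is a minimal sketch in Java (the class and method names are my own inventions; a real implementation would store the token in the HTTP session and embed it in a hidden field of each form):

```java
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Base64;

public class CsrfTokenUtil {
    private static final SecureRandom RANDOM = new SecureRandom();

    // Generate an unguessable token; store it in the session and embed it
    // in a hidden form field. A forged cross-site request can't know it.
    public static String newToken() {
        byte[] bytes = new byte[32];
        RANDOM.nextBytes(bytes);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
    }

    // Compare the session token to the submitted one in constant time.
    public static boolean isValid(String sessionToken, String submittedToken) {
        if (sessionToken == null || submittedToken == null) {
            return false;
        }
        return MessageDigest.isEqual(sessionToken.getBytes(), submittedToken.getBytes());
    }
}
```

On each sensitive POST, reject the request when isValid() returns false. Since the attacker’s IMG tag or auto-submitting form can’t read the victim’s session, it can’t supply the right token.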

For more information, check out the sources below. From this list you can see that it’s a tall order to prevent CSRF attacks. However, the various techniques add up to hopefully raise the bar high enough to require a skilled attacker. That’s probably the best you can hope for since a skilled attacker will probably get in, anyway.

Further Reading

As I said earlier, I’m just learning this stuff and know enough to whip up the simple demo you’ve seen here. I encourage you to read more about this issue because there are excellent resources out there describing it much better than I can. Here’s a starter set:

Unconventional Warfare

Let’s get medieval.

Imagine a castle, stout and impenetrable even on this ordinary day. Guards mill about with glistening swords at their sides, anxious to try them out. Bored look-outs peer over the parapets, ever watchful for the approach of a mighty army. Prima donna archers play Uno in the towers.

Meanwhile, real life happens. Peasants and traders enter and exit the castle as part of their daily activities. The scent of steak-on-a-stake and cotton candy fills the air. All is well at our imaginary castle.

Or is it?

While the castle is certainly ready for a conventional enemy that would storm the gates and attempt to smash the walls, it was totally unprepared for what happened that fine day.

The crown jewels were stolen.

The lord of the castle is perplexed. After all, the defenses were ready — every man was at his post and ready to engage in mortal combat with enemy knights. Everyone was on the look-out. And still, the jewels are gone.

This castle scenario is what computer security seems like to me these days. We have DMZs and firewalls, intrusion detection, intrusion prevention, encryption, application resources protected by roles, and if we’re lucky, maybe even strong passwords. But more and more I’m coming to the realization that while we need those things, we as developers really need to bring out our inner guerilla. We have a handle on the conventional warfare but we almost never think like a hacker. They’re going to dress like a peasant and mimic the ways of a peasant. They’re going to be the peasant.

They’re also going to rob you blind.

The problem is, thinking like a hacker is not in our nature. Unfortunately, that needs to change. Because quite simply, unlike the crown jewels that physically disappeared, our electronic crown jewels can be stolen and yet simultaneously remain in our possession. It’s the nature of the ones and zeros.

We even need to be security conscious during non-coding activities such as writing use cases. The reason is that even seemingly innocuous business functions can provide a covert pathway to the crown jewels.

Consider this blog post. Be sure to read the case study.


Did you notice that conventional security techniques wouldn’t have prevented it?

This realization I had — that we need to think more like hackers to protect ourselves — did not just hit me out of the blue. I’m far too dense for that and wouldn’t have felt it. Instead, it comes from reading the blogs of the white hat hackers. In my case, Jeremiah Grossman and RSnake are the ones that scare me on a daily basis. In fact, the blog post above is the work of RSnake. It’s these guys that repeatedly hit me upside the head to make me see things in a different light. And when they hit me, I definitely feel it.

Today, more and more developers are aware of the perils of SQL injection. That doesn’t mean there aren’t a lot of vulnerabilities out there, but developer awareness of this particular attack is rising. That’s good. Now we need to learn more about XSS and CSRF and do what we can to avoid them. It’s all too easy to read about these attacks and yet not fully comprehend the danger because the descriptions are often too abstract. But folks like Jeremiah and RSnake make us smarter by showing us exactly what the ramifications are in all of the gory detail.

As developers, we need to be familiar with what these guys are writing about. Black hat hackers probably already know this stuff. So should we.

Encrypting Only the Login Page

I occasionally hear people say that they don’t want to use SSL for anything other than the login page for fear of the performance hit that SSL adds. Most recently, I came upon this post which claims to show how to implement such functionality in a J2EE web application.

While the above post seems to have some technical inaccuracies, I’m not writing to criticize it. Rather, I think it’s interesting to consider the ramifications of only encrypting the login page. The issues fall into three categories:

  • Performance
  • Maintainability
  • Security Impacts

Let’s have a look at each category in turn.

Performance

Performance is the crux of the reason to encrypt only the username and password. Developers hear that SSL adds a 30% overhead to response times, freak out, and then either avoid SSL entirely or use it only sparingly, as this use case indicates. But really, the biggest hit of SSL is the initial setup of the socket. Once that is done, a big chunk of the SSL performance burden is gone. Since the socket is usually maintained until a timeout happens, multiple requests can leverage that same one-time setup cost.

Another thing to realize about SSL performance is that you don’t pay the overhead on your total response time. For example, assume that you have a dynamic page that takes 1 second to complete. Let’s further assume that 25% of that time is socket-level and 75% is making the database call and rendering the page. Clearly, I’m just fudging these numbers, but you’re not going to see a 30% increase in the 750ms of database and rendering time, for example. By the same token, the more data you present (including images!), the more encryption has to be done, which takes time.

Finally, a hardware SSL accelerator will go a long way toward reducing the SSL overhead. I don’t have any numbers for this but encryption in hardware will beat encryption in software any day.

Maintainability

While it doesn’t seem like this should be the case, encrypting only the login page introduces a software maintainability issue. Specifying that the login page should have a transport-guarantee of CONFIDENTIAL in web.xml will do the trick of ensuring that SSL is used for the login. No custom coding is required.
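In web.xml, that declaration looks something like this (the URL pattern is an assumption for illustration):

```xml
<!-- Force SSL for the login page only; everything else stays clear. -->
<security-constraint>
  <web-resource-collection>
    <web-resource-name>Login page</web-resource-name>
    <url-pattern>/login.jsp</url-pattern>
  </web-resource-collection>
  <user-data-constraint>
    <transport-guarantee>CONFIDENTIAL</transport-guarantee>
  </user-data-constraint>
</security-constraint>
```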

However, once the browser has switched to the SSL port, how do you declaratively tell it to go back to the clear (non-SSL) port after login? The answer is you can’t. Even though other resources defined in web.xml may not have the CONFIDENTIAL guarantee, the server will not automatically switch back to the clear port.

I can only think of two ways to switch back to the clear port — a servlet filter or links.

By using a servlet filter you could examine the request and then redirect to the clear port for anything other than the login page if the request was made over SSL. Do-able but not pretty.
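Here is a sketch of the core of such a filter's logic, factored into a plain helper so it stands on its own (the class and the port mapping are my own assumptions, not a WebLogic API; the real filter would call this from doFilter() and send a redirect when a non-null URL comes back):

```java
public class ClearPortRewriter {

    // Map the SSL port back to its clear-port twin. These numbers are
    // assumptions for a default WebLogic domain (7002 -> 7001) and
    // standard HTTP (443 -> 80).
    static int clearPortFor(int sslPort) {
        return sslPort == 7002 ? 7001 : 80;
    }

    // Returns the clear-port URL to redirect to, or null when no redirect
    // is needed (already on the clear port, or this is the login page).
    public static String toClearUrl(String scheme, String host, int port, String uri) {
        if (!"https".equals(scheme) || uri.startsWith("/login")) {
            return null;
        }
        return "http://" + host + ":" + clearPortFor(port) + uri;
    }
}
```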

The other way is to have all of your links specifically refer to the clear port with something like http://whatever/somepage.jsp. Clicking such a link will cause the request to be serviced without SSL but now you have hard-coded links all over the place. You could also build your link dynamically but then you’d have that code to maintain.

So far, we’ve talked about performance and maintainability. Now it’s time to consider the security aspects…

Security Impacts

In this section you’ll see what that 30% SSL premium buys you.

It makes sense that you’d want to protect the user’s password and perhaps even the username itself. You wouldn’t want anyone sniffing the wire and finding those goodies. But by not encrypting the link post-login, all data sent or received from that user can be sniffed. You require authorization for the user to see the data but allow anyone with a packet sniffer to see social security numbers, medical conditions, or whatever sensitive data your application handles. Shouldn’t that be kept safe, too?

Perhaps you think you’re safe because you’re not going out over the internet. That is a good thing, of course, but the majority (80% is what sticks in my head) of attacks are from insiders. Kinda risky.

You’ve now seen that sensitive data can be sniffed from the link. Here’s where it gets really interesting…

You’ve encrypted the username and password during login so that no bad guys can login as that user in the future. But since subsequent traffic is unencrypted you’ve just given away the keys to the kingdom. How’s that? Session hijacking.

Now that traffic is unencrypted, someone sniffing the network can see session IDs either as a request parameter or in a cookie. Given this information and an active session on the server, the malicious user can become the previously authenticated user and access the application as if he were the legitimate user. Scary stuff…


I think you can tell by now what side of the fence I fall on. I think the security issues alone call for SSL during and after login. That 30% premium allows me to sleep at night.

Did I forget or overemphasize anything? Please post your thoughts below.

Fallback Authentication in WebLogic 9.2

Ah, nothing like blogging near a crackling fire. Sort of sounds like a line from a Christmas song, doesn’t it?

Speaking of hot topics, I recently noticed that BEA added the concept of container-managed fallback authentication to WebLogic 9.2. What’s fallback authentication? Here’s an example of a use case that I’ve heard more than once:

Imagine your application uses client certificates to authenticate users. Unfortunately, sometimes reality is less than tidy and you find that some users don’t have certificates nor are they able to get them for some reason. You’d like to be able to use form-based authentication if the user doesn’t present a valid certificate. In other words, if client certificate authentication fails, you want to have a fallback method for authentication.

You can do this today with any servlet engine but you’ll have to handle security on your own. Container-managed security is out because the servlet specification does not consider fallback authentication. After all, you can only supply one auth-method in web.xml. (For the literary buffs among you, the previous line is a foreshadowing detail.)

So now back to WebLogic’s support for fallback authentication. The documentation for it is here. Essentially, you have two choices for enabling fallback authentication:

  1. Supply a comma-separated list of auth-methods in web.xml
  2. Specify the REALM auth-method in web.xml which ultimately grabs the comma-separated list from the security realm

I tried the first approach first. Just kinda made sense. Here’s the snippet from my web.xml:


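Reconstructed from the error message that follows (the realm name is an assumption), it was along these lines:

```xml
<login-config>
  <auth-method>CLIENT-CERT,FORM</auth-method>
  <realm-name>myrealm</realm-name>
</login-config>
```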
This login-config element should allow me to implement the use case given above. However, upon deployment, WebLogic complained with the following message:

Invalid auth-method list – CLIENT-CERT,FORM as the auth-method in web.xml, which is not valid. Valid values are BASIC (default), FORM and CLIENT-CERT.

Well, I could have told it that. Wasn’t WebLogic going to do some proprietary magic to get past the deployment descriptor compliance check? Apparently not, and I don’t know of any way to turn it off. I even removed the web-app DTD DOCTYPE entry in web.xml to no avail.

I’m just tenacious (stupid?) enough to try the other approach where you specify REALM as the auth-method in web.xml. Not surprisingly, it also failed compliance checking. What is surprising is that it didn’t fail because the auth-method was the non-standard "REALM" but because it was null. Now, I knew I hadn’t set the auth-methods attribute on the RealmMBean as described in the doc, but I was doing test-driven configuration so I had to see it fail first. 😉

I suspected that WebLogic was correctly substituting the auth-methods value in the RealmMBean when it encountered the REALM auth-method in web.xml. The value just happened to be null. So, I fired up WLST, set authMethods to "CLIENT-CERT,FORM", and saved the change. After restarting WebLogic, the exact same error occurred as with the plain web.xml method. The comma-separated list was not valid. However, WebLogic did correctly substitute the value from the realm when it saw the REALM auth-method in web.xml.
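For reference, that WLST session amounted to something like this sketch (the domain name, credentials, and URL are assumptions; setAuthMethods corresponds to the auth-methods attribute on the RealmMBean described in the doc):

```
connect('weblogic', 'weblogic', 't3://localhost:7001')
edit()
startEdit()
cd('SecurityConfiguration/mydomain/Realms/myrealm')
cmo.setAuthMethods('CLIENT-CERT,FORM')
save()
activate()
disconnect()
```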

Sorry to say, but there’s no happy ending here. I can only surmise that BEA’s implementation was half-baked and wasn’t supposed to make it into the docs. Perhaps the next release of WebLogic will have a working implementation. In the meantime, we’re stuck with rolling our own fallback authentication scheme. It’s also very possible that I missed a step somewhere. Please let me know if I did.

Well, the fire’s not really dying, but here I am goodbye-ing… 😉

Troubleshooting Authentication Issues with Audit Logs

In Common Problems with Authentication Provider Configuration, I wrote some tips for troubleshooting authentication/authorization problems in WebLogic. This post is a continuation of that one and shows how to decipher an audit log for troubleshooting purposes.

Audit logs are truly a secret weapon. Sure, they allow you to do the normal audit log things like going back in time to see who did what and when, but that stuff is for your security guy. As a developer, audit logs shine for other reasons such as these uses that I wrote about previously:

While Common Problems with Authentication Provider Configuration discussed the relationship between authentication and authorization, this post will highlight each as a discrete step and make it clear as to why a user can’t access a resource.

Laying the Groundwork

We’re going to look at three authentication/authorization scenarios to compare the audit log output:

  • Successful Authentication and Authorization
  • Authentication Failure
  • Authorization Failure

You can run through the scenarios on your own WebLogic domain if you’d like. For this demonstration I started with a new domain and added a “Managers” group which will simply be an extraneous group that you’ll see in the output. I then added a user named “squidward” and added him to the Administrators and Managers groups. I also added user “spongebob” but only included him in the Managers group.

We now have two users in the Managers group. Squidward is in the Administrators group and will be able to access the WebLogic Admin Console application. Spongebob is not an administrator and will not have access.

Finally, I created a new DefaultAuditor (see WebLogic Auditing for how to do this) and restarted WebLogic.

Now let’s try it out…

Successful Authentication and Authorization

I pulled up the Admin console and logged in as Squidward. Here’s the audit log output:

#### Audit Record Begin <Sep 16, 2006 10:24:23 AM> <Severity =SUCCESS> <<<Event Type = Authentication Audit Event><squidward><AUTHENTICATE>>> Audit Record End ####

#### Audit Record Begin <Sep 16, 2006 10:24:23 AM> <Severity =SUCCESS> <<<Event Type = Authorization Audit Event ><Subject: 3
Principal = class weblogic.security.principal.WLSUserImpl("squidward")
Principal = class weblogic.security.principal.WLSGroupImpl("Administrators")
Principal = class weblogic.security.principal.WLSGroupImpl("Managers")
><ONCE><<url>><type=<url>, application=console, contextPath=/console, uri=/, httpMethod=GET>>> Audit Record End ####

You can see that we have two events representing authentication and authorization. The Authentication event indicates that Squidward successfully established his identity. Furthermore, the Authorization event shows that he was permitted access because he’s in the Administrators group.

Note that the authorization event contains some useful extra information. First, we see that it’s Squidward who accessed the resource (/console). We also see that he is in the Administrators and Managers groups. Inclusion in the Administrators group is his ticket to the Console application, but you also now know that he happens to be in the Managers group even though the Console application does not care about that group.

We now know what the “happy path” looks like in the audit log. Now for the not-so-happy paths…

Authentication Failure

I logged Squidward out of the Console application. I then tried to log in as Squidward again but this time I supplied a wrong password. The audit log output is shown below:

#### Audit Record Begin <Sep 16, 2006 7:35:59 PM> <Severity =FAILURE> <<<Event Type = Authentication Audit Event><squidward><AUTHENTICATE>>> <FailureException [Security:090304]Authentication Failed: User squidward [Security:090302]Authentication Failed: User squidward denied> Audit Record End ####

Note that this time there is only a failed authentication event. WebLogic did not attempt authorization because it could not establish the user’s identity. Naturally, you can only check a user’s permissions if you know who the user is.

In this example, I provided a correct username but an incorrect password. I could have gotten the exact same output if I had provided a username/password for a non-existent user. In each case, no identity could be established.

Authorization Failure

For this scenario, I logged in as Spongebob with the correct password. The resulting audit events are shown below:

#### Audit Record Begin <Sep 16, 2006 7:39:40 PM> <Severity =SUCCESS> <<<Event Type = Authentication Audit Event><spongebob><AUTHENTICATE>>> Audit Record End ####

#### Audit Record Begin <Sep 16, 2006 7:39:40 PM> <Severity =FAILURE> <<<Event Type = Authorization Audit Event ><Subject: 2
Principal = class weblogic.security.principal.WLSUserImpl("spongebob")
Principal = class weblogic.security.principal.WLSGroupImpl("Managers")
><ONCE><<url>><type=<url>, application=console, contextPath=/console, uri=/, httpMethod=GET>>> Audit Record End ####

The output here is very similar to the successful first scenario. Spongebob authenticated just fine but was not authorized to access the Console application. The reason is evident in the event itself — Spongebob is not a member of the Administrators group.


You can see that the audit log can help you track down access problems. Was it a bad username/password combination or simply an authorization error? Output from your application (or Console as used here) doesn’t help you make the determination. In fact, the two very different failure scenarios showed the exact same error message on the Console login page. Audit logs show the real reason.

The other nice thing about authorization events is that the output shows the groups to which the user belongs. That sometimes comes in handy.

If the tips here weren’t sufficient to solve your problem, your last resort is security debugging. By enabling security debugging you can get copious amounts of output from the security providers to see exactly what is going on. Here are some links to get you going:

Happy auditing!

Common Problems with Authentication Provider Configuration

I haunt the BEA security forum and often see people struggling with common authenticator configuration issues. One of two things usually happens:

  • The server won’t boot
  • A user can’t authenticate or isn’t authorized for a resource

I’ll show you how to fix these problems after a brief explanation of what authenticators do. This will give you context for understanding the solutions and handling future troubleshooting sessions.

What Authenticators Do

Authenticators are responsible for authenticating users as part of the WebLogic security framework. I know you want to throw your monitor at me for making such an obvious statement. Even my cat smacked me when I wrote it. However, what’s not so obvious is that authenticators have a supporting role in the authorization process.

Authenticators deal with principals. In WebLogic, principals are users and groups. Thus, an authenticator can tell the security framework whether a user successfully authenticated, but it can also say to which groups a user belongs. It does this by creating a Subject object and populating it with the username and the names of the groups to which the user belongs. This information is used by the role mapping providers which in turn feed the authorization providers. The authorization providers make a decision based on the security policy of the requested resource and the information provided by the other security providers. This is how an authenticator can cause authorization problems. We’ll see more on that in a bit.
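A rough JAAS-flavored sketch of that Subject population (the principal class here is a stand-in; WebLogic uses its own WLSUser and WLSGroup principal implementations):

```java
import java.security.Principal;
import javax.security.auth.Subject;

// Stand-in principal; WebLogic's providers use their own principal classes.
class NamedPrincipal implements Principal {
    private final String name;
    NamedPrincipal(String name) { this.name = name; }
    public String getName() { return name; }
}

public class SubjectSketch {
    // Build a Subject holding the username plus every group the user belongs
    // to, which is what downstream role mappers and authorizers consume.
    public static Subject authenticatedSubject(String user, String... groups) {
        Subject subject = new Subject();
        subject.getPrincipals().add(new NamedPrincipal(user));
        for (String g : groups) {
            subject.getPrincipals().add(new NamedPrincipal(g));
        }
        return subject;
    }
}
```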

Out of the box, users are stored in WebLogic’s embedded LDAP. This means all “normal” users as well as the WebLogic administrative user typically named “weblogic” or “admin” are stored there. Additionally, groups are also stored in embedded LDAP. Users and groups are stored there because the embedded LDAP serves as the DefaultAuthenticator’s data store. Remove the DefaultAuthenticator and the users and groups in embedded LDAP will not be used. Or, you could add another authenticator which would have its own user/group storage. Now, the users and groups known to WebLogic encompass both data stores.

There’s one more critical piece about authenticators and that’s the Control Flag. Each authenticator has a Control Flag that can be set to REQUIRED, REQUISITE, SUFFICIENT, or OPTIONAL. The flag determines how the security framework treats the authenticator, in particular whether that authenticator must successfully authenticate the user for the overall login to succeed. You can find an explanation of these flags here.
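To see why the flags matter, here’s a simplified simulation of the JAAS-style semantics they follow (an illustration only: the names are invented, and WebLogic’s real evaluation happens inside the security framework):

```java
import java.util.Arrays;
import java.util.List;

public class ControlFlagSketch {

    enum Flag { REQUIRED, REQUISITE, SUFFICIENT, OPTIONAL }

    static class Auth {
        final Flag flag;
        final boolean succeeds;  // would this authenticator authenticate the user?
        Auth(Flag flag, boolean succeeds) { this.flag = flag; this.succeeds = succeeds; }
    }

    // Simplified JAAS-style combination: every REQUIRED/REQUISITE must
    // succeed; a successful SUFFICIENT short-circuits the rest (unless a
    // REQUIRED already failed); if only SUFFICIENT/OPTIONAL authenticators
    // are configured, at least one of them must succeed.
    static boolean login(List<Auth> stack) {
        boolean requiredSeen = false, requiredOk = true, anyOptionalOk = false;
        for (Auth a : stack) {
            switch (a.flag) {
                case REQUIRED:
                    requiredSeen = true;
                    if (!a.succeeds) requiredOk = false;  // fail, but keep evaluating
                    break;
                case REQUISITE:
                    requiredSeen = true;
                    if (!a.succeeds) return false;        // fail immediately
                    break;
                case SUFFICIENT:
                    if (a.succeeds && requiredOk) return true;  // short-circuit
                    if (a.succeeds) anyOptionalOk = true;
                    break;
                case OPTIONAL:
                    if (a.succeeds) anyOptionalOk = true;
                    break;
            }
        }
        return requiredSeen ? requiredOk : anyOptionalOk;
    }

    public static void main(String[] args) {
        // DefaultAuthenticator left at REQUIRED plus an external LDAP
        // authenticator: a user known only to LDAP fails to log in.
        System.out.println(login(Arrays.asList(
            new Auth(Flag.REQUIRED, false),      // user not in embedded LDAP
            new Auth(Flag.SUFFICIENT, true))));  // user found in external LDAP
        // Change the DefaultAuthenticator to SUFFICIENT and it works.
        System.out.println(login(Arrays.asList(
            new Auth(Flag.SUFFICIENT, false),
            new Auth(Flag.SUFFICIENT, true))));
    }
}
```

This is exactly the trap in the checklists below: with the DefaultAuthenticator left at its default of REQUIRED, a user who exists only in a second authenticator’s store can never log in.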

With the authenticator explanations out of the way, let’s move on to the problems and their solutions.

Getting the Server to Start

Starting WebLogic requires authentication and authorization of the boot identity (the main WebLogic administrative user). Just like with any other protected resource, authentication and authorization are delegated to the security framework. With the normal security configuration, the DefaultAuthenticator will find the boot user, authenticate it, populate the Subject with its groups, and start-up will continue, assuming the authorizers are pleased.

You must have at least one authenticator in a security realm. One of those authenticators must be able to find and authenticate the boot user. If not, the server won’t start and you’ll get the following error message:

Authentication denied: Boot identity not valid

When this happens, check the following:

  1. That the boot identity (normally “weblogic”) is stored in a data store managed by an active authenticator
  2. That the boot identity is in the “Administrators” group
  3. That the control flags and ordering of all active authenticators are appropriate. Forgetting to change the DefaultAuthenticator to OPTIONAL or SUFFICIENT in multiple-authenticator configurations is the leading cause of authentication issues
  4. That boot.properties contains the correct username/password (or that you’re typing the correct username/password when you’re not using boot.properties)
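For reference, the boot identity file, boot.properties, holds just two entries. WebLogic encrypts the values the first time the server reads the file (the password below is a placeholder):

```properties
username=weblogic
password=welcome1
```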

Essentially, getting the server to start requires that a user be authenticated and authorized. This sounds an awful lot like…

Authenticating Users for an Application

That’s right. The same principles apply to “normal” authentication and authorization in J2EE applications. A common problem is trying to log in to a web application only to get the login page again. As mentioned above, you don’t really know whether the failure to access the page stems from an incorrect username/password or from the simple fact that the user (while properly authenticated) is not authorized for access.

Check the following when you can’t log in to your application:

  1. That the user is located in a data store managed by an active authenticator
  2. That the role mentioned in the security constraint in web.xml maps to a principal (usually a group) in weblogic.xml (For WebLogic 9.x, you can read WebLogic 9.1 Authorization Gotcha for a painful lesson I learned)
  3. That the control flags and ordering of all active authenticators are appropriate. Forgetting to change the DefaultAuthenticator to OPTIONAL or SUFFICIENT in multiple-authenticator configurations is the leading cause of authentication issues.

This checklist is essentially the same as the first list because the two problem areas are actually the same type of problem. Starting the server is just a special case due to the mechanics of specifying the boot identity (such as via boot.properties or the weblogic.management.username and weblogic.management.password system properties).

The tips given above should have you well on your way to solving your authentication/authorization problems.

Identity Assertion

If your user authenticates with a perimeter token such as a client certificate, most of the tips above still apply. The difference is that 1) there’s an identity asserter provider for the token type in question; and 2) that token has to be mapped to an existing user in the realm.

The first difference is fairly obvious. It’s the second one that can cause some problems. Regardless of the token type, somehow its contents must map to a user known to WebLogic. For example, if you configure an identity asserter to handle X.509 certificates, you might indicate that the user name is the email address within the certificate. In that case, your user data store must contain that email address just like the data store must contain the username and password for username/password authentication. By the same token (yes, the pun was definitely intended), the user must be in the appropriate group for authorization purposes, just like in all of the scenarios above.

For more information on X.509 identity assertion, see Mutual Authentication in Action.

Multiple Security Realms

Up until now, I never mentioned a domain with multiple security realms. The reason I didn’t mention it is that there can only be one realm active at a time. Inactive ones are not used at all. Referencing an inactive realm in web.xml has no effect. In fact, the realm name in web.xml is totally arbitrary and regardless of what you specify, the active realm will be used. See WebLogic Security Framework Overview for more information.

Even so, WebLogic’s functionality in this regard can cause you grief if you configured an authenticator in an inactive realm. The fix, of course, is to either make your new realm the active one or add the extra authenticator to the active realm.


This post described the authentication and authorization process. When troubleshooting authentication and authorization problems, consider the following:

  • Is the username and password correct?
  • Is the security constraint mapped to the role you think it is?
  • Is the role mapped to the proper group?
  • Is the user actually in that group?
  • Is an authenticator properly configured to point to the data store containing the user and group?
  • For multiple authenticators, are the control flags and authenticator order appropriate?
  • If you have multiple security realms, realize that only one is active at a time

In my next post, I’ll show you some troubleshooting techniques that quickly make the cause of the problem jump up and bite you on the nose.

Mutual Authentication in Action

This post is a continuation of the Fifteen Minute Guide to Mutual Authentication. In that post, I walked you through configuring WebLogic for two-way SSL, or mutual authentication. It was a whirlwind tour whose purpose was to drive home the essentials of PKI theory while emerging with a simple working implementation.

This post picks up where the other left off by having the user’s certificate suffice for web application authentication. In other words, with mutual authentication the user does not have to log in with a username and password. Instead, the user’s certificate is his ticket to ride.

Here’s what we’ll do in this post:

  • Create a user
  • Configure identity assertion in WebLogic
  • Configure a web application to use CLIENT-CERT authentication
  • Configure role-to-principal mapping in weblogic.xml

The prerequisite for this post is a browser that will accept the server’s certificate and a server that will accept the browser’s client certificate. If you don’t already have that set up, refer to the Fifteen Minute Guide to Mutual Authentication and follow the steps there. You can also use certificates issued by other Certificate Authorities (CAs), but you must be able to connect the browser to the server as described in the guide. If not, the steps in this post will not work for you until that basic problem is resolved.

Enough introductory babble. Let’s get to it.

Create a User

When we’re done with configuration, client certificates will map to known users in the WebLogic security realm. I’m assuming here that you are using the DefaultAuthenticator which stores users and group information in WebLogic’s embedded LDAP.

You need to add a user via WebLogic Console. Note that the steps given here are for WebLogic 8.1.4 but they are approximately the same for WebLogic 9.x. To add a user, navigate to the Security->Realms->myrealm->Users node in the applet.

On the right-hand side, click on Configure a new User. In the General tab, enter the user ID in the Name field. If you used the example from the Mutual Authentication Guide, enter "Spongebob". Otherwise, enter the value of the Common Name (CN) of your test certificate. Enter and confirm a password. The password won’t be used for mutual authentication, but make it a strong password anyway.

Click Apply and then select the Groups tab. Add the user to the Administrators group by moving "Administrators" to the Current Groups box. Click Apply.

You now have a new user in the Administrators group. The user ID for this user matches the CN of the client certificate you’ve loaded in your browser.

Configure Identity Assertion

Next up is identity assertion configuration. Identity assertion is the process by which a token from the request is mapped to a known user.

You need to configure WebLogic to use an X.509 certificate as the authentication token. Furthermore, you’ll configure a username mapper to map the certificate’s Distinguished Name (DN) to a user ID. The username mapper is part of the identity asserter.

These steps assume you are using the DefaultIdentityAsserter. To configure it, navigate to the Security->Realms->myrealm->Providers->Authentication->DefaultIdentityAsserter node in the applet.

On the General tab, the User Name Mapper Class Name and Trusted Client Principals fields can remain blank. For Types, however, move X.509 to the Chosen side and remove AuthenticatedUser. An identity asserter can only support one type at a time. Click Apply.

Click the Details tab to configure username mapping. Check the Use Default User Name Mapper field. Select CN for the Default User Name Mapper Attribute Type. Leave Default User Name Mapper Attribute Delimiter and Base64DecodingRequired set to "@" and selected, respectively. Click Apply and then restart WebLogic for the changes to take effect.

You’ve now told WebLogic to use its default username mapper to map the certificate to the user. It does this by pulling the CN value from the DN of the certificate. The default username mapper can handle these basic functions. If you have more complicated mapping requirements, you can write a custom username mapper and specify it on the General tab in the User Name Mapper Class Name field.
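If you’re curious what such a mapper boils down to, here’s a self-contained sketch using only JDK classes. (A real custom mapper implements WebLogic’s UserNameMapper interface and is handed the certificate itself; the class and method names below are invented for illustration.)

```java
import javax.naming.InvalidNameException;
import javax.naming.ldap.LdapName;
import javax.naming.ldap.Rdn;

public class CnMapperSketch {

    // Sketch of what the default username mapper does: pull the CN
    // attribute out of the certificate's subject DN and use it as the
    // user ID to look up in the security realm.
    static String mapDnToUser(String subjectDn) throws InvalidNameException {
        LdapName dn = new LdapName(subjectDn);
        for (Rdn rdn : dn.getRdns()) {
            if ("CN".equalsIgnoreCase(rdn.getType())) {
                return rdn.getValue().toString();
            }
        }
        return null;  // no CN present: identity assertion will fail
    }

    public static void main(String[] args) throws InvalidNameException {
        // Prints "Spongebob", the user we created earlier.
        System.out.println(mapDnToUser("CN=Spongebob,OU=Test,O=Example,C=US"));
    }
}
```

If the emitted user ID doesn’t exist in the realm, or the certificate’s DN lacks the configured attribute, identity assertion fails and the user is rejected.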

Regardless, given a valid client certificate, if the username mapper emits a user ID that can be found in the security realm, the user will be authenticated.

Configure a Web Application

At this point, you’ve defined a user whose username matches the CN of the client certificate. You’ve also told WebLogic how to map client certificates to users.

You now need to configure your web application to use what you’ve configured. You only need to do three things:

  1. Apply a security constraint to a resource in web.xml
  2. Set the authentication method to CLIENT-CERT in web.xml
  3. Configure role-to-principal mapping in weblogic.xml

All three of these changes are made in the web application’s deployment descriptors. I’ve provided a sample web application for you that can be deployed without changes if you want to skip the editing. The application also shows how to pull the user’s certificate from the request if you need to (although you usually don’t).

You must protect a resource with a security constraint for the security framework to kick in. Here’s an example security constraint from web.xml (the url-pattern shown protects the entire application):

      <security-constraint>
        <display-name>Example Security Constraint</display-name>
        <web-resource-collection>
          <web-resource-name>Protected Area</web-resource-name>
          <url-pattern>/*</url-pattern>
        </web-resource-collection>
        <auth-constraint>
          <role-name>Admin</role-name>
        </auth-constraint>
      </security-constraint>

With a security constraint in place, you need to tell WebLogic how you want it to handle authentication. You’re probably most familiar with the FORM authentication type for doing username/password authentication. However, we want to extract the certificate from the two-way SSL session that’s already been established. To do this, we use the CLIENT-CERT authentication type. (You can find more information on the various authentication types here.) Here’s the pertinent snippet from web.xml:

      <login-config>
        <auth-method>CLIENT-CERT</auth-method>
      </login-config>

With this web.xml file in place, we’ve told WebLogic what we want to protect and how users should be authenticated. When the user requests a protected resource, WebLogic will notice and the security framework will spring into action. Before a user can be authorized to access the protected resource, WebLogic first has to determine who the user is. Since we told it to use the certificate from the SSL connection, WebLogic will extract the certificate and hand it to the identity asserter. The identity asserter will attempt to map the certificate to a known user using a username mapper class. If it finds a match, the user is authenticated.

After authentication, authorization checks kick in and the request will be processed if the user is allowed access.

The last thing to do is to map the "Admin" security role defined in web.xml to one or more principals. This mapping is done in weblogic.xml. We’ll map the "Admin" role to the "Administrators" group so that any user in the Administrators group will have the Admin role and will thus be granted access. Here’s the relevant snippet from weblogic.xml:

      <security-role-assignment>
        <role-name>Admin</role-name>
        <principal-name>Administrators</principal-name>
      </security-role-assignment>

At this point, deploy your application (or use the sample) and see if you can log in with the certificate.

If it doesn’t work, troubleshooting is straightforward. Since you already have two-way SSL working properly, there should be no problem with the server, client, or CA certificates. Here’s what could be wrong:

  • Incorrect security constraint
  • Incorrect role to principal mapping in weblogic.xml
  • Not requesting a web page with a security constraint
  • Not using https
  • Incorrect username mapper configuration
  • Non-existent user (given the username mapping)
  • User is not in the appropriate group for the security constraint

Once everything is correctly configured, you should be able to log in with only the client certificate.


You now have a web application that uses private/public keys for a very strong form of authentication. In the implementation shown here, the user’s public certificate does not exist on the server side. Rather, the user’s certificate is trusted because the server trusts the certificate’s issuer. Once the certificate is trusted, WebLogic maps it to a known user, which can have the same group memberships as any other user.

Finally, thanks to the J2EE specification, your application does not have to deal with any of the mutual authentication machinery other than specifying in web.xml that you want it.

Security Debugging in WebLogic 9

The technique for getting debug information from the WebLogic security framework changed in WebLogic 9.x. In fact, BEA has made setting debug flags easier across the entire server.

In the console, navigate to

Servers -> AdminServer -> Debug

and note all of the goodies for which you can get debug information.

Getting back to the security framework, continue navigating to

weblogic -> security

and enable debugging for your areas of interest. Debug output is immediate after you activate the changes.

Thanks to Marky Middleton on the BEA forums for the tip!

Read Security Realm Logging in WebLogic 8.1 if you want debug output from an 8.1 server.

