Covering J2EE Security and WebLogic Topics

Learning how to write secure web apps

If you write web applications you owe it to yourself, your company, and your users to have some knowledge of the exploitation techniques that will be used against them. Knowing the techniques helps you write more secure code.

The obvious way to learn about such things is to read books or security web sites. The more interesting (OK, fun!) way of doing it is to actually perform the exploits against a purposefully insecure web application that’s built to be hacked.

OWASP has had such an application for years. WebGoat is an insecure J2EE application that provides lessons on how to exploit its weaknesses. I tried it a while ago and it’s really eye-opening from a non-hacking developer’s perspective.

Google has just created a similar application called Jarlsberg. Jarlsberg runs on Google’s AppEngine and is written in Python. However, language choice doesn’t matter much when it comes to security vulnerabilities in web applications. Like WebGoat, Jarlsberg teaches you how to perform the exploits in a series of hands-on lessons.

I haven’t tried Jarlsberg yet but it’s on my list of things to do.

Happy hacking!

Good primer on web security

Christian Heilmann wrote a nice post on web security topics in which he gives an overview of the common attacks.

He also provides some practical tips for being safer on the web (as a user) and things you can do on your server and in your applications to be more secure. Neither his tips nor the article itself is about WebLogic, but certainly most of the information is relevant to any server and a good reminder in any case.

Certificate to User Mapping in WebLogic

A reader of my Fifteen Minute Guide to Mutual Authentication post commented that perhaps the burden of doing mutual authentication (two-way SSL) isn’t worth the effort since you still have to map certificates to users. I’ll admit, that does seem like a bummer.

However, since most companies already have a list of users in LDAP or a database it’s probably not a big deal most of the time. WebLogic has several authentication providers that you can use to tap into your existing user store. Still, I think it’s a great question worth exploring so let’s consider what’s involved.

Strong Authentication

Having the user provide a certificate to the server is a form of strong authentication. No password traverses the network and the user must possess the certificate file. It’s extremely difficult to spoof a user due to the cryptography involved. The user is most definitely authenticated when the two-way SSL link is successfully created. In other words, they are who they say they are, assuming you trust the Certificate Authority (CA) that signed their certificate.

Strong authentication alone is the primary benefit of using mutual authentication, of course. The user is who they say they are. We now need to turn our attention to authorization so that we can determine whether or not they can access the system.

Implicit Groups

If you don’t want to maintain cert-to-user mappings you can leverage WebLogic’s implicit groups. See the link for more info but you could use the “users” group to authorize users without maintaining a user list. The “users” group represents any authenticated user. When using mutual authentication, any user identified by their certificate will automatically be in this group. If you can get away with one role in your application then the “users” group may work for you. However, be sure to read about CAs below.
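For illustration, here’s roughly what that looks like in weblogic.xml. This is only a sketch: the role name “AuthenticatedUsers” is made up for the example, and your web.xml would declare the same role and reference it in a security-constraint.

<security-role-assignment>
    <role-name>AuthenticatedUsers</role-name>
    <principal-name>users</principal-name>
</security-role-assignment>

With a mapping like this, any user who presents a trusted certificate lands in the implicit “users” group and therefore satisfies the role.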

Certificate Authority Scope

The huge drawback to using the implicit “users” group in this way is that any certificate signed by the CAs you trust will be acceptable to your application. If you’re using the default JDK truststore “cacerts” you’ll wind up trusting certificates signed by all the major CAs, and a ton of minor ones, from around the world.

The fix, of course, is to control the CA. For example, if your company has its own CA, you can configure your truststore to contain only that CA’s certificate. Then you only accept users from your company (and partners, if they get certs signed by your company’s CA).

In this scenario you will still allow anyone with a certificate signed by the company’s CA to access your app. Whether that’s acceptable or not is a business decision. If you go this route, make doubly sure that you ONLY trust your company’s CA.

Deprovisioning Users

When you don’t have a list of users, how do you deny them access when they quit?

Revoking the certificate is the obvious answer. Checking certificate revocation requires a custom security provider and is outside the scope of this article. However, the implication is that your public key infrastructure must be mature enough to have OCSP or CRLs for you to check revocation in the first place.
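The WebLogic side is out of scope here, but to give a feel for what a CRL/OCSP-backed check involves, here’s a rough, standalone sketch using the plain JDK certificate path APIs. The file names are made up, and the JVM still has to be able to reach actual CRLs or an OCSP responder for the revocation check to succeed.

import java.io.FileInputStream;
import java.security.cert.CertPath;
import java.security.cert.CertPathValidator;
import java.security.cert.CertificateFactory;
import java.security.cert.PKIXParameters;
import java.security.cert.TrustAnchor;
import java.security.cert.X509Certificate;
import java.util.Collections;

public class RevocationCheck {
    public static void main(String[] args) throws Exception {
        CertificateFactory cf = CertificateFactory.getInstance("X.509");

        // Hypothetical file names for the user's certificate and your CA's certificate
        X509Certificate userCert =
            (X509Certificate) cf.generateCertificate(new FileInputStream("user.cer"));
        X509Certificate caCert =
            (X509Certificate) cf.generateCertificate(new FileInputStream("ca.cer"));

        // Build a one-certificate path anchored at your CA
        CertPath path = cf.generateCertPath(Collections.singletonList(userCert));
        PKIXParameters params =
            new PKIXParameters(Collections.singleton(new TrustAnchor(caCert, null)));

        // Ask the validator to consult revocation data (CRLs/OCSP) during validation
        params.setRevocationEnabled(true);

        // Throws CertPathValidatorException if the certificate is revoked or otherwise invalid
        CertPathValidator.getInstance("PKIX").validate(path, params);
        System.out.println("Certificate chain validated and not revoked");
    }
}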

Conclusion

To summarize, it is possible to use mutual authentication without maintaining a list of users. However, you need to exercise extreme care when doing so. Also, authorization flexibility is reduced to just one possible role since there’s only one group to which to map.[1]


Footnote:

[1] Technically, you can do the mapping to multiple roles if certain preconditions exist and you’re willing to do some extra work. As an example scenario, let’s say you use your own CA and it puts OU fields in the DN, which you might be able to leverage for role mapping. You’d have to write your own authentication provider to add groups to the subject based upon the OU information, and then you can map roles to the groups as usual in your application’s deployment descriptors.
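To make that a bit more concrete, here’s a minimal sketch of pulling the OU values out of a subject DN. The class name and the DN are hypothetical, and this is nowhere near a complete authentication provider:

import java.util.ArrayList;
import java.util.List;
import javax.naming.ldap.LdapName;
import javax.naming.ldap.Rdn;

public class OuExtractor {

    // Collect the OU values from a subject DN so a custom authentication
    // provider could turn them into group names on the subject.
    public static List<String> extractOUs(String subjectDn) throws Exception {
        List<String> ous = new ArrayList<String>();
        for (Rdn rdn : new LdapName(subjectDn).getRdns()) {
            if ("OU".equalsIgnoreCase(rdn.getType())) {
                ous.add(rdn.getValue().toString());
            }
        }
        return ous;
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical DN; prints [Engineering]
        System.out.println(extractOUs("CN=jdoe,OU=Engineering,O=Example,C=US"));
    }
}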

WebLogic 10 Active Directory Authentication Provider Bug

Reader Cobbie Behrend emailed with a bug he noticed in the Active Directory authentication provider in WebLogic 10. He writes:

“The class that handles AD authentication has a small bug in it that causes authentication to fail [at a later time] after someone logs in incorrectly. During authentication the AD provider binds twice using the same LDAP connection, once with the username password being authenticated, and once with the credentials supplied when you configure the LDAP provider. If authentication fails, the second binding doesn’t happen, and the unauthenticated LDAP connection is returned to the internal LDAP connection pool. This poses a problem when later trying to authenticate and the unauthenticated LDAP connection is retrieved from the pool (you get a stack trace from netscape LDAP classes telling you that the connection has not been bound).

Below is the nested stack trace that you get from WebLogic. The really confusing part when you try to figure this one out is that the point of failure changes, as it all depends on when the bogus connection is being used… also if you are using the same AD user for WebLogic configuration of LDAP, and for testing your application (typical bad development behavior), you don’t notice that the connection is bogus when you turn security logging on. So below the failure is at getDNForUser, but I’ve also seen it happen getting the group members of a group (when testing using a different user).”

netscape.ldap.LDAPException: error result (1); 00000000: LdapErr: DSID-0C090627, comment: In order to perform this operation a successful bind must be completed on the connection., data 0, vece
at netscape.ldap.LDAPConnection.checkMsg(Unknown Source)
at netscape.ldap.LDAPConnection.checkSearchMsg(Unknown Source)
at netscape.ldap.LDAPConnection.search(Unknown Source)
at weblogic.security.providers.authentication.LDAPAtnDelegate.getDNForUser(LDAPAtnDelegate.java:3310)
at weblogic.security.providers.authentication.LDAPAtnDelegate.authenticate(LDAPAtnDelegate.java:3180)
at weblogic.security.providers.authentication.LDAPAtnLoginModuleImpl.login(LDAPAtnLoginModuleImpl.java:200)
at com.bea.common.security.internal.service.LoginModuleWrapper$1.run(LoginModuleWrapper.java:110)
at java.security.AccessController.doPrivileged(Native Method)
at com.bea.common.security.internal.service.LoginModuleWrapper.login(LoginModuleWrapper.java:106)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:585)
at javax.security.auth.login.LoginContext.invoke(LoginContext.java:769)
at javax.security.auth.login.LoginContext.access$000(LoginContext.java:186)
at javax.security.auth.login.LoginContext$4.run(LoginContext.java:683)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:680)
at javax.security.auth.login.LoginContext.login(LoginContext.java:579)
at com.bea.common.security.internal.service.JAASLoginServiceImpl.login(JAASLoginServiceImpl.java:93)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:585)
at com.bea.common.security.internal.utils.Delegator$ProxyInvocationHandler.invoke(Delegator.java:57)
at $Proxy11.login(Unknown Source)
at weblogic.security.service.internal.WLSJAASLoginServiceImpl$ServiceImpl.login(Unknown Source)
at com.bea.common.security.internal.service.JAASAuthenticationServiceImpl.authenticate(JAASAuthenticationServiceImpl.java:82)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:585)
at com.bea.common.security.internal.utils.Delegator$ProxyInvocationHandler.invoke(Delegator.java:57)
at $Proxy32.authenticate(Unknown Source)
at weblogic.security.service.PrincipalAuthenticator.authenticate(Unknown Source)
at weblogic.servlet.security.internal.SecurityModule.checkAuthenticate(SecurityModule.java:256)
at weblogic.servlet.security.internal.SecurityModule.checkAuthenticate(SecurityModule.java:205)
at weblogic.servlet.security.internal.FormSecurityModule.processJSecurityCheck(FormSecurityModule.java:245)
at weblogic.servlet.security.internal.FormSecurityModule.checkUserPerm(FormSecurityModule.java:200)
at weblogic.servlet.security.internal.FormSecurityModule.checkAccess(FormSecurityModule.java:91)
at weblogic.servlet.security.internal.ServletSecurityManager.checkAccess(ServletSecurityManager.java:82)
at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2076)
at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2046)
at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1366)
at weblogic.work.ExecuteThread.execute(ExecuteThread.java:200)
at weblogic.work.ExecuteThread.run(ExecuteThread.java:172)

Cobbie points out that you don’t even see the stack trace above unless you enable authentication debugging. He says, “In the console you must set environment -> servers -> AdminServer -> Logging -> Advanced -> Severity Level to debug. To turn on security logging you have to change environment -> servers -> AdminServer -> Debug -> WebLogic -> security -> atn -> DebugSecurityAtn to enabled.”

Cobbie contacted Oracle and received a patch but it doesn’t seem to be a “normal” one in that it was not given a unique name like most patches. Instead, Oracle sent him a new cssWlSecurityProviders.jar file via email.

Cobbie notes:

“The patch works, but only if you have “Use Retrieved User Name as Principal” set to false. If a user sets “Use Retrieved User Name as Principal” property to true the connection should be returned to the pool for the retrieved user (something that was also fixed by this patch).

This works great if a correct password is passed in. However if incorrect credentials are passed in and you have “Use Retrieved User Name as Principal” set to true, you have the same issue as before. The unauthenticated connection gets added to the LDAP connection pool, and future attempts to use it fail.

It appears that there still is no check to confirm that the connection is authenticated before the connection is returned to the pool (or from the pool).”

Hopefully, this post will spare others the troubleshooting effort when they encounter this subtle bug.

Thanks Cobbie!

The WebLogic Administration Port

Is your WebLogic console available to anyone on the Internet? A quick Google search might be eye-opening:

http://www.google.com/search?q=%22Administration+Console%22+%22Sign+in%22+WebLogic+inurl:%22console%22&hl=en&filter=0

If you try this search you’ll see approximately two pages of exposed consoles. A colleague I showed this to pondered: “How many still use weblogic/weblogic?” Good question. Scary, too.

Simon Vans-Colina wrote “Competitive Intelligence Gathering with Google” wherein he discusses two ways of gleaning which server a site is using: searching for stack traces, which give very obvious clues, and the console-searching technique shown above.

It’s also important to realize that the results from the search above only found the unfortunate sites whose links were published in some way for Google to find them. Regardless of the search results, someone could go directly to your site and slap a “/console” on the end of your domain to see what happens.

These problems aren’t new and they’re not limited to WebLogic, of course. Any server software that exposes a web-based administration application is fair game. Blogging apps, CMS’s, you name it — they all expose an admin app. If you’re lucky, your application will provide some way to restrict access to the administrative functionality. This post is about securing WebLogic’s console.

You have two choices if you don’t want your server caught with its pants down:

  1. Disable the console application
  2. Use the administration port

Disabling the console application is a bit extreme. If you do that you can only administer your server with weblogic.Admin, WLST, or a custom JMX client. That all seems a bit too inconvenient so the second option is the way to go and will be described below.

Enabling the administration port prevents the masses from accessing your console while giving you access to the same. It’s the best of both worlds. With the administration port enabled:

  • The console is only accessible over a non-standard port (which should not be available from outside your firewall)
  • You have to use SSL
  • You get a dedicated administration listen thread
  • Administrative requests over any port other than the admin port are rejected

So, by using the admin port you protect your console and get several other nice side-effects. The admin port will not be on port 80 or 443 and will thus not be available outside your firewall. Technically, you could open the admin port on the firewall but then you’re back in the same boat. Also, anybody who tries to do an administrative request over any other port will find their request rejected.

Another feature is that your interaction with the console has to be over SSL, which protects your data as it transits the wire. And really, who doesn’t love SSL? (Aside: When was the last time you heard of data compromise via an SSL attack? Pretty rare, indeed.)

Finally, using the admin port gets you a dedicated listen thread. What’s the big deal? Let’s say that you have 40 listen threads and the admin port is not enabled. A bug or poor resource utilization causes all 40 threads to be blocked or otherwise unavailable. If you want to get into the console and see what’s going on or fix the problem you’d be out of luck because there’s no thread available to do your work. That’s not good.

The outlook is sunnier when you’ve enabled the admin port, though. Now you have 40 normal listen threads and a dedicated admin thread. If all 40 normal threads are hung you still have a responsive thread for doing your investigation. Of course, you can also look at the separate thread benefit from the opposite perspective: An admin poking around on the console won’t consume a thread intended for satisfying user requests.

Now you’re clamoring for the admin port, right? πŸ˜‰ Let’s set it up.

A prerequisite is that you have SSL set up normally, which is well-documented by BEA. After that, access the console and do the following:

  1. Click on the first node under Domain Structure (it’s your domain name)
  2. Click Lock & Edit
  3. Select the Enable Administration port checkbox
  4. Specify the administration port
  5. Click Save
  6. Click Activate Changes

No restart is required and you’re automatically switched to using the admin port. By the way, the same page allows you to disable the console or change its context path.

Now that you’re using the admin port, if you attempt to access the console over the standard listen ports you’ll be greeted with

Console/Management requests can only be made through an administration channel

Unfortunately, this is also a fingerprint of WebLogic but at least your console is hidden behind your firewall. You can mitigate this fingerprinting to a degree by changing the context path for the console. For example, I changed mine to SecurityThroughObscurity and I would thus access the console via

https://localhost:7777/SecurityThroughObscurity

assuming 7777 is my admin port. Now, going to http://localhost/console will provide the inquisitive user with a nice 404–Not Found page.

The choice of “SecurityThroughObscurity” was obviously tongue-in-cheek, but it does highlight the fact that changing the context path doesn’t secure anything; it just makes the console harder to find. Every little bit helps, I guess.

There are some things of which you need to be aware if you’re running a cluster. See Administration Port and Administrative Channel for more information.

P.S. Don’t forget to change BOTH your username and password. People are on to weblogic/weblogic, I’m afraid… πŸ˜‰

Digital Signatures Explained

It’s fairly easy to get digital signatures working with web services. Just pull up the docs for your web service stack and follow the directions. Some configuration here and keystores there and you’re good to go.

But just what is happening under the covers? Digitally signing something might seem like magic but it’s rather simple conceptually even though it builds on some pretty heavy theory (mostly math, ugh!). However, in this post I’m going to talk about the concepts and leave the math to someone else.

What’s the Purpose of a Digital Signature?

Before we start decomposing the mechanics of signing data, let’s first consider what we want to use a signature for in the first place.

Data Integrity

A digital signature allows the receiver to check if the data has been altered since the sender signed it. This function is performed via a cryptographic hash which I’ll talk about later.

Verification of the Sender

Digital signatures use private and public keys. An entity (person or process) signs the message with his/its private key and then the recipient can use the entity’s public key to verify the source.

Hey, you got your Digital Signature in my SSL!

Do these two functions seem a little like what SSL does? You’re right! SSL provides those features for data in transit while a digital signature does the same thing at the message level. SSL and digital signatures don’t work in the exact same way but they do perform similar high-level functions. One interesting difference between the two is that the digital signature stays with the message even if it’s sitting in a queue or on disk somewhere (assuming that it’s not intentionally stripped at some point).

Try that, SSL!

I’d like to mention one more thing about SSL and signatures before digging out of this SSL rabbit hole: You can use signatures and SSL at the same time. Why might you want to do this? There are several reasons:

  • You want to encrypt your message over the wire. That’s SSL’s sweet spot.
  • Your message gets processed by multiple machines and you want each to verify the original sender of the message. SSL can’t do this past the first recipient since it’s a point-to-point protocol. The digital signature, on the other hand, travels with the message wherever it may go.

How Digital Signatures Work

Creating signed data is a two-step process. The first step is to hash the data and the second step is to sign the hash. Both of these steps are cryptographic operations but neither actually encrypts the data. Fortunately, the Java API provides classes for doing these operations so we don’t have to write any of that complex stuff. We’ll see these APIs in action in a bit.

Let’s Hash it Out

There are several algorithms for generating one-way cryptographic hashes. You’ve probably heard of MD5 and SHA-1. These algorithms take any amount of data and convert it to a fixed length byte array that can’t be reversed. That is, given the hash, you can’t determine what the input was. Additionally, two different inputs will never generate the same hash.

NOTE: Technically, neither of the previous assertions is absolutely true. The time and computing power required to reverse a hash make it unlikely. The more likely way to "reverse" a hash is to leverage a pre-computed hash dictionary, which I’ll discuss briefly later. Finally, there are so-called "collisions" where two different inputs create the same hash, but this situation is extremely rare.

Because of these features, hash algorithms are often used for storing passwords. Take the user’s password, hash it, and then store it in LDAP or a database. You can’t guess the password from the hash so the stored passwords are reasonably secure from prying eyes. But when the user logs in, you hash the newly supplied password and compare it against the hash on file. If they match, the user is authenticated.

I bet you already knew that stuff. The cool thing is that’s the first half of generating a signature. Before we move on to the second half, let’s have a look at how to generate a hash using the Java APIs.

import java.security.MessageDigest;

MessageDigest md = 
     MessageDigest.getInstance("SHA-1");

// Feed the message into the digest
md.update("Corned beef hash".getBytes());

// Create the hash from the message
byte[] hash = md.digest();

The code above leverages the MessageDigest class for hashing data. "Digest" is another word for "hash." We tell the MessageDigest object that we want to use the SHA-1 algorithm and then feed it the data using the update() method. You can call that method repeatedly until all of your data is included in the hash. Then, simply call digest() to get the fixed-length hash.

Notice that the hash is actually a byte array, which isn’t directly printable. So, to show you the hash for the input data, I’ll first encode it like this:

String encodedHash = 
    new sun.misc.BASE64Encoder().encode(hash);

I know, I know. Don’t use the sun.* classes. I’m just saving you the trouble of downloading something like Commons Codec in order to try this out. Just don’t use sun.* classes in production.

Anyway, now that the hash has been encoded, I can tell you that the hashed version of "Corned beef hash" is

TARd8ciquglqtzCGlhl/Ano8+kE=

Notice that the length of the hash is longer than the input. Let’s try hashing "Corned beef hash is not dog food":

H6exMsBveZPenXK756/i+ph1z8Q=

Notice that the length of the hash is still the same. It would even be the same length for a megabyte worth of text.

Before we move on to signing data, I’d like to mention one more thing about hashing. The one-way nature of a cryptographic hash is very useful but it can bite you. Since the same input always generates the same hash for a given algorithm, a bad guy who can get hold of your hashed data might be able to use a precomputed hash dictionary to determine your original text. It’s sort of like a reverse-lookup of the hash. For example, a hash dictionary will have the hashes for common passwords such as "Password" or "ABC123" and the bad guy can just query the hash to get the corresponding input.

Not good.

The remedy is to add some "salt" to the hash. A salt is just a bit of data that you add to your input text when you compute the hash. This simply equates to another call to the update() method. Only you know the value of this salt, which acts like a simple pass key or password and negates a bad guy’s ability to determine the input data.

For example, the hash for "Corned beef hash" is TARd8ciquglqtzCGlhl/Ano8+kE= and always will be. The hash of "Corned beef hash" with a salt of "Pinch of Salt" is rEN7xxJPqyY7pkspLL902NkmJn0=. Obviously, the phrase "Pinch of Salt" would have to be kept secret.
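In code, the extra update() call might look like the following minimal sketch. Exactly how you combine the salt and the data (order, delimiters, and so on) is your choice, so don’t expect this to reproduce the value above byte for byte:

MessageDigest md = MessageDigest.getInstance("SHA-1");
md.update("Pinch of Salt".getBytes());    // the secret salt
md.update("Corned beef hash".getBytes()); // the actual input
byte[] saltedHash = md.digest();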

Signing Data

Conceptually, we would now just sign the hash. However, with the Java API, the hashing is done for us as part of the signing process so we wouldn’t actually perform the steps above to generate the signature. Instead, we’d just use the Signature class.

Using the Signature class is a little more involved than hashing because we need a private key to actually do the signing. The corresponding public key would be used by the recipient to verify the signature later. Ideally, you would have your keys in a keystore and use them with the Signature class. For demonstration purposes I’m going to generate the keypair on the fly. Yes, I’m lazy but it also makes the pertinent signing machinery stand out better. Call it artistic license. πŸ˜‰

Here’s the sample code:

import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.SecureRandom;
import java.security.Signature;

// Generate a keypair which 
// contains the private/public keys
KeyPairGenerator keyGen = 
    KeyPairGenerator.getInstance("DSA");
keyGen.initialize(1024, new SecureRandom());
KeyPair keyPair = keyGen.generateKeyPair();

// Sign some data with the private key
Signature sig = Signature.getInstance("DSA");
sig.initSign(keyPair.getPrivate());
sig.update("Sign on the dotted line".getBytes());
byte[] signedData = sig.sign();

The first group of code generates a sample keypair that, as shown here, will only live until it goes out of scope. It’s good enough for our purposes, though. The second group is where the action is. We tell the Signature object that we want to use the DSA (Digital Signature Algorithm) and then we load it with the private key to use as the signer. Add the text via update() just like we did for hashing and then call sign(). Just like before, we get back a byte array which is unprintable. After encoding the signed data, I can tell you that the signature for "Sign on the dotted line" looks like this:

MCwCFA1+YXhgSu0xCP6lhKVO9QH5DYbcAhRQ/V5i8czHiMxL7SnyLtafZNoL9A==

Now, the results here are a little trickier than when hashing. If you run this code, you’ll get a different encoded string than what’s shown here. It’ll be different for two reasons:

  1. You’re using a different private key
  2. DSA signing includes a random value each time, so even the same key and message yield a different signature on every run

When I run it again with the same private key I get

MCwCFBlCOnD9MfPgTtDUohfh7z/TArU7AhRqXyeSHAzzW97+ha2V5d4RDfZq8w==

So, even though the lengths are the same the output is different. By the way, the length will be the same regardless of the input size.

Pretty cool, isn’t it? There’s a lot of stuff going on in those few lines of code. Now we have signed data but how does the recipient verify it?

Verifying Signed Data

Let’s say I send you an email that contains two lines. The first line is

Sign on the dotted line

and the second line is

MCwCFA1+YXhgSu0xCP6lhKVO9QH5DYbcAhRQ/V5i8czHiMxL7SnyLtafZNoL9A==

Obviously, these lines represent the message and the signed message, respectively.

To see if I REALLY said "Sign on the dotted line" you would mash together my public key, the message, and the signed message to see if they align. That’s imprecise language for the process of determining if, given the message, the private key associated with the public key would produce the signed message. It’s sort of equivalent to the process of checking passwords using a hash as described above except the keys have been added to the mix.

Here’s the code for doing just that:

Signature sig2 = Signature.getInstance("DSA");
sig2.initVerify(keyPair.getPublic());
sig2.update("Sign on the dotted line".getBytes());

// Decode the Base64 string back into the raw signature bytes before verifying
byte[] signatureBytes = new sun.misc.BASE64Decoder().decodeBuffer(
    "MCwCFA1+YXhgSu0xCP6lhKVO9QH5DYbcAhRQ/V5i8czHiMxL7SnyLtafZNoL9A==");

boolean verified = sig2.verify(signatureBytes);

As before, the first line tells the Signature object which signing algorithm to use and it has to match the algorithm that was used originally to sign the message. The second line loads the public key that matches the private key used to sign the data. (Important: The recipient does NOT and should not have your private key!)

We then load the message with the update() method, decode the Base64 string back into the raw signature bytes, and pass those bytes to the verify() method. If verify() returns true, the signature is valid. Changing even one character in the message or the signed data will cause verification to fail, which is what you want. And obviously, specifying a public key that does not match the signer’s private key will cause a failure, too.

To sum up verification, if the message is modified in any way, verification will fail. Somebody monkeyed with the data and you were able to detect it. That’s data integrity checking. If verification fails because a mismatched public key was used, then you know that someone other than who you expected signed the message. To my knowledge, you can’t distinguish between the two causes of verification failure.

Signing Off

Knowing the fundamentals of digital signatures will help you to understand things that build on them such as the XML Digital Signature specification or the Java implementation of it. Here’s a re-cap and a few other take-aways:

  • Signing cannot prevent changes to the message, but changes can be detected
  • You can detect change, but you can’t tell WHAT changed
  • The signed message can travel with the data or can be separate (for example, in the email scenario above, I could have sent you the signed data in a separate email and told you it went with the first message)
  • Encryption and signatures are not the same. With encryption, you intend for the original message to be recoverable at some point.
  • You can think of a signature as a very fancy checksum
  • In PKI, private and public keys are mathematically related. The private key is only needed for signing and the public key is only needed for verification.
  • A salt can be a constant (but still secret) value or it can be generated randomly with each use. In both cases the verifier must have access to the salt.

DBMS Security Providers

BEA’s Peter Laird recently wrote an excellent article entitled "WebLogic Security: Configuring the Database Authentication Providers (SQL, Custom, DBMS)." His post describes the following DBMS authentication providers that come with WLS 9 and later:

  • SQL Authentication provider
  • Read-Only SQL Authentication provider
  • Custom DBMS Authentication provider

Peter lays out the technical details of the providers as well as their differences. He then finishes with a SQL authenticator configuration walk-through.

I am surprised to see that he says that when choosing an authentication repository "…you are safest performance-wise with a database backed authentication store." I do agree that databases are typically well-understood by developers but I’d think that an LDAP server would kick the tail of a database in the speed department.

Anyway, that’s a tiny nitpick on an outstanding article. I encourage you to have a look.

I think I’m done gushing about Peter’s article but wait, there’s more! Turns out that Peter is the Managing Architect for the WebLogic Portal team. In the prequel to the above article he wrote "Discussion on WebLogic Security: Authentication Providers, Internal LDAP, JAAS, WebLogic Portal, Profile." This post is a set of fact-filled soundbites concerning Portal and security. If you do portal work you’ll want to have a look at this post, too.

XSS and Web Frameworks

Matt Raible recently blogged about Java Web Frameworks and XSS. The post and the comments are well worth reading. It’s easy to think (hope!) that a framework will automatically escape output to prevent XSS and give no more thought to it. As Matt’s post shows, you really need to know how your chosen framework deals with the issue.
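For context, “escaping output” just means converting HTML metacharacters before they reach the page. A hand-rolled sketch (not any particular framework’s API, and no substitute for your framework’s or a library’s own escaping) looks something like this:

// Replace HTML metacharacters so user-supplied text renders as text,
// not markup. The ampersand must be handled first.
public static String escapeHtml(String input) {
    return input.replace("&", "&amp;")
                .replace("<", "&lt;")
                .replace(">", "&gt;")
                .replace("\"", "&quot;")
                .replace("'", "&#39;");
}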

If you use Struts 2 or WebWork be sure to read the post and update your libraries.

RoleManager Audit Events in WebLogic

Want to fill up your audit logs quickly? Set the auditor’s severity to INFORMATION and you’re well on your way. In this post we’ll take a closer look and see if the information gained is worthy of the disk space and processing time.

More is Better, Right?

It’s natural to expect that audit logs won’t be as "chatty" as application logging. After all, you’d typically only expect one or a handful of authorization events for each accessed resource. Application logging, on the other hand, might spew dozens of lines per request depending upon the logging level.

With this in mind, your security officer or well-meaning admin might see that the WebLogic DefaultAuditor is initially set to a severity of ERROR leaving not one but TWO severity levels untapped. More security data has to be good, right?

Not necessarily. Besides INFORMATION, the other severity level below ERROR is WARNING. I’ve never seen a WARNING event from the out-of-the-box providers. That’s not to say they don’t exist — just that I’ve never seen one. The INFORMATION severity is the lowest level, and it only seems to include a certain class of Role Manager events.

Role Manager audit events can be sourced from a Role Mapping provider or an Authorization provider. Useful Role Manager events can happen at the SUCCESS and FAILURE levels, but the INFORMATION-level events are highly repetitive and provide little bang for buck. Here are a couple of examples:

#### Audit Record Begin <Jun 26, 2007 9:20:01 PM> <Severity =INFORMATION> <<<Event Type = RoleManager Audit Event ><Subject: 2
Principal = class weblogic.security.principal.WLSUserImpl("weblogic")
Principal = class weblogic.security.principal.WLSGroupImpl("Administrators")
><<adm>><type=<adm>, category=Configuration><>>> Audit Record End ####

#### Audit Record Begin <Jun 26, 2007 9:20:01 PM> <Severity =INFORMATION> <<<Event Type = RoleManager Audit Event ><Subject: 2
Principal = class weblogic.security.principal.WLSUserImpl("weblogic")
Principal = class weblogic.security.principal.WLSGroupImpl("Administrators")
><<adm>><type=<adm>, category=Configuration><||Anonymous||Admin>>> Audit Record End ####

As you can see, there’s very little actionable information here. Yes, user "weblogic" did something but we’re not quite sure what.

Crunch the Numbers

To give you an idea of the volume of Role Manager events at the INFORMATION severity, I started up a WebLogic 8.1 domain which includes five custom applications. I then logged into console but went no further than the initial page. Here’s the breakdown of audit events (note that I’ve enabled configuration auditing):

Authentication: 2
Authorization: 6
AuthorizationPolicy Deploy: 25
Invoke Configuration: 1
RoleManager: 772
RoleManager Deploy: 3
Set Attribute: 10

As you can see, the RoleManager events account for 94%(!) of all events for my scenario. Hitting Refresh on the console caused approximately the same number of Role Manager events. I haven’t timed it, but writing all of those events to disk is probably quite measurable.

Console makes heavy use of JMX so I suspect a lot of the Role Manager events are caused by that. I tested a "normal" web app with just a protected page. Here are the results:

Authentication: 1
Authorization: 1
RoleManager: 14

Thus, for one request, the Role Manager events comprise 88% of the total number of events. The information is slightly different (and maybe even a little useful) as long as you don’t mind seeing it a bunch of times. Here are a couple events:

#### Audit Record Begin <Jun 26, 2007 10:44:08 PM> <Severity =INFORMATION> <<<Event Type = RoleManager Audit Event ><Subject: 2
Principal = class weblogic.security.principal.WLSUserImpl("weblogic")
Principal = class weblogic.security.principal.WLSGroupImpl("Administrators")
><<url>><type=<url>, application=ImplicitGroupsApp, contextPath=/implicitgroupsapp, uri=/users/users.jsp, httpMethod=GET><>>> Audit Record End ####

#### Audit Record Begin <Jun 26, 2007 10:44:08 PM> <Severity =INFORMATION> <<<Event Type = RoleManager Audit Event ><Subject: 2
Principal = class weblogic.security.principal.WLSUserImpl("weblogic")
Principal = class weblogic.security.principal.WLSGroupImpl("Administrators")
><<url>><type=<url>, application=ImplicitGroupsApp, contextPath=/implicitgroupsapp, uri=/users/users.jsp, httpMethod=GET><||user||Anonymous||everyone||Admin>>> Audit Record End ####

I suspect these are sourced by the authorization provider given that it’s showing the requested resource information. The list of roles is barely useful — which one is required?

Quantum Logging

If you decide not to use the INFORMATION severity, you can still get the equivalent information from the audit log if you need to. The first thing to consider is the Authorization event. Here’s the event that accompanied the RoleManager event above:

#### Audit Record Begin <Jun 26, 2007 10:44:08 PM> <Severity =SUCCESS> <<<Event Type = Authorization Audit Event ><Subject: 2
Principal = class weblogic.security.principal.WLSUserImpl("weblogic")
Principal = class weblogic.security.principal.WLSGroupImpl("Administrators")
><ONCE><<url>><type=<url>, application=ImplicitGroupsApp, contextPath=/implicitgroupsapp, uri=/users/users.jsp, httpMethod=GET>>> Audit Record End ####

Notice that the resource information is identical to the equivalent RoleManager event.

How can you know which role was required for "/users/users.jsp"? One way is to check that application’s web.xml. However, that data could be newer than what was in place when the event was logged (e.g., web.xml was updated and the app was redeployed after the event).

A better way to do it is to find the most recent corresponding Authorization Policy Deploy event prior to the authorization event in question. For example,

#### Audit Record Begin <Jun 26, 2007 9:12:12 PM> <Severity =SUCCESS> <<<Event Type = Authorization Policy Deploy Audit Event ><Subject: 1
Principal = class weblogic.security.principal.WLSKernelIdentity("<WLS Kernel>")
><<url>><type=<url>, application=ImplicitGroupsApp, contextPath=/implicitgroupsapp, uri=/users/*, httpMethod=GET><user>>> Audit Record End ####

shows one of the policies for the ImplicitGroupsApp. Note that the policy applies to "/users/*" and requires the "user" role for URIs with that pattern.

This concludes our little romp through an audit log. If you choose to not select the INFORMATION severity you can save yourself considerable disk space while still retaining the ability to get the data you need.

check-auth-on-forward

What happens when a servlet (or JSP) forwards the user to a protected resource for which the user does not have authorization? According to the servlet specification, the user will see the protected resource. Surprise!
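As a concrete (and entirely hypothetical) example, imagine an unprotected servlet that forwards to a JSP covered by a security-constraint in web.xml:

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical unprotected servlet. The forward target, /admin/report.jsp,
// is assumed to be protected in web.xml, yet the container will not check
// that constraint when the request arrives via this forward.
public class ReportServlet extends HttpServlet {
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        request.getRequestDispatcher("/admin/report.jsp").forward(request, response);
    }
}

The weblogic.xml stanza shown at the end of this post changes that behavior.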

I checked the servlet specifications on this subject. Servlet 2.2 has no explicit mention of what happens during forwards or includes from a security perspective. Starting with Servlet 2.3, however, section SRV.12.2 explicitly states that declarative security does not apply to forwards and includes.

I’d prefer it to default the other way such that the container checks security for forwards and includes. Too bad for me, I guess. Fortunately, WebLogic meets the specification’s requirement by default but provides a way to check security if you want to enable it. To use it, add the following stanza to weblogic.xml:

<container-descriptor>
   <check-auth-on-forward/>
</container-descriptor>

Now, authorization will be checked for the target forward or include.

