Channel: owsm – AMIS Oracle and Java Blog

Using OWSM x509 token client policy with OSB 11gR1 PS3


Since 11gR1, Oracle Web Services Manager (OWSM) has been integrated with the SOA Suite. This means you can easily attach web service policies for security and management to your SOA Suite artifacts. In this post I will explain how to attach an x509 client policy and do the configuration needed to actually get it working. This policy is an implementation of the OASIS Web Services Security X.509 Certificate Token Profile 1.1.

In short, the configuration consists of the following steps:

  1. Create a keystore with the certificate
  2. Configure the keystore and credentials in Enterprise Manager
  3. Attach policy to service
  4. Run

First we will create a keystore containing a key pair (self-signed certificate). You can do this very easily with keytool.

keytool -genkeypair -keyalg RSA -dname "cn=sao-host.domain.local,dc=amis,dc=nl" -alias signkey -keypass welcome1 -keystore testkeystore.jks -storepass welcome1 -validity 1064

There are other ways to create this. Some blogs say you have to create a certificate with the SubjectKeyIdentifier extension, which you can only do with OpenSSL. But other posts claim OpenSSL certificates will not work. So there is confusion all over the place. For me, creating the certificate this way worked, and for testing purposes it is sufficient.

In the above command I highlighted the parts you need to remember. We will need them later when we configure the keystore in EM and the policy in OSB.

Next comes the tricky part. Tricky in that it is very badly documented and there are no examples. The documentation is scattered all over the place and not very coherent, so it took me some time to figure this out. You can see that everything around security in Enterprise Manager is still in a transition phase: parts of the security configuration are still done in WebLogic, while other parts can already be done through Enterprise Manager. This makes it somewhat confusing what to do where. The configuration of this particular OWSM policy can be done entirely inside Enterprise Manager.

The directory DOMAIN_HOME\config\fmwconfig is very important for the configuration of security in Enterprise Manager. First we need to copy our keystore to this location. Amongst other files, this directory now contains the following two important files:

  1. cwallet.sso
  2. testkeystore.jks (the keystore we created earlier, copied to this directory)

cwallet.sso is the file-based credential store used to store domain-wide credentials. We will store the credentials we need to get the policy working in here later. testkeystore.jks is the keystore we just copied here.

First we make sure that EM/WSM will use the keystore we just created, so browse to the Security Provider Configuration and configure the keystore.

Press the Configure button.

We need to set some defaults for the signature key and the encryption key. This is mandatory. You can just enter some values; they do not have to exist inside the keystore, as they are not checked here. When we later execute the policy with a particular signature key, we can be sure it uses that one and not the configured default.

So what do we need to configure next? Before we continue I will first explain how OWSM policies, the keystore and the credential store work together.

If we take a look at the policy we are going to use, we see three configuration properties, of which keystore.sig.csf.key is the most interesting one for us, since we will only use signing. By default this key is mapped to the sign-csf-key entry in the oracle.wsm.security map of the credential store. So when the policy is executed, it uses the username and password stored under that key to retrieve the actual private key from the keystore and create the signature; the username is used as the keystore alias. The keystore itself is protected with a password that is stored under keystore-csf-key in the credential store. A dashed line means a default mapping.

This is the default behavior. You can add your own csf keys to the credential store to map to different aliases in the keystore. Now you have the possibility to use multiple certificates instead of just one default one.

You can assign another csf key in multiple places. When you create a copy of the policy inside EM, you can set the value of keystore.sig.csf.key to your own key name. Or you can do it when you attach the policy to the OSB service inside OEPE, or after you have deployed the service, inside the OSB console.
In my case I do not want to use the defaults, so I add an additional key to the oracle.wsm.security map in the credential store, containing the alias and password I want to use for my policy.


I named it my.sig.csf.key and filled in the alias signkey as the username and welcome1 as the password.
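If you prefer scripting over the EM screens, the same credential can be added with WLST (a sketch, run with $MW_HOME/oracle_common/common/bin/wlst.sh; the admin URL and login are made-up placeholders, while the map, key, user and password match the values above):

```python
# WLST (Jython) sketch; connection details are hypothetical
connect('weblogic', 'welcome1', 't3://localhost:7001')
createCred(map='oracle.wsm.security', key='my.sig.csf.key',
           user='signkey', password='welcome1',
           desc='alias and password of the signing key')
# Verify the entry was created
listCred(map='oracle.wsm.security', key='my.sig.csf.key')
```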

The final step is to attach the policy to the OSB service. Since I want to attach the policy to a business service, I need to select a service client policy.

To attach the wss11_x509_token_with_message_protection_client_policy to my OSB service, I make a copy of the existing policy for my own use. I do not want to use any encryption, and I want to sign some additional headers instead of the default ones (only the WS-Addressing headers). Otherwise I could have used the out-of-the-box policy.

I changed the policy name and unchecked the encryption of the body on both the request and the response. Furthermore, I removed the WS-Addressing header signing. After this I saved the policy.
Attaching this new policy to your business service is very simple. On the policy tab of the business service I selected "OWSM Policy from policy store". Then you can add the policy by browsing the policy list that is retrieved from OWSM (MDS store). Make sure you attach the OSB configuration project to your server first, otherwise you get an error that no server can be found to retrieve the policies from.

Now that I have deployed the OSB project, I can browse to the service and set the correct signing key. Press Properties and the window below opens. Enter my.sig.csf.key as the value for keystore.sig.csf.key and enter signkey as the override value for keystore.recipient.alias. This alias is used to retrieve the public key to encrypt outgoing messages. Somehow this property is mandatory, so we need to provide a valid value, as it is checked.

Well, that's it. When you test the service inside the OSB console you will see the request is signed.


OWSM Custom Policies – Still some sharp edges, so beware! Don't cut yourself.


In my last post I talked about using an out-of-the-box policy to sign your outgoing SOAP message. Although it is not very well documented, once you figure out how to configure the keystore and credential store it is quite simple to use. The problem is that the out-of-the-box policies need some tailoring before they can be used in real-world situations. Unfortunately I was only able to sign the entire body and not a specific element. What I needed was a more basic policy that signs only a specific element, so I needed to create a custom policy. According to the documentation there is an API I can use: extend some classes and you can create your own policies. Simple; well, in theory…

 

Image is copyrighted. Used with permission from DuraLabel.com


You need to extend oracle.wsm.policyengine.impl.AssertionExecutor and create your own CustomAssertion class. The class should contain some standard methods with standard handling; I just followed Oracle's recommendations. You subclass this class again to create the actual assertion/policy execution class. The main method that does all the work is the execute method. It has one input parameter of type SOAPBindingMessageContext (an implementation of the IContext interface). Well, that is all you get: the SOAP message with some convenience methods to access the transport headers and the SOAP body. So where do I start?

My first challenge was figuring out how to access the credential store and the keystore. I want my custom policy to have a property containing the csf key the policy will use to retrieve a private/public key pair from the (file/JKS) keystore. There is a custom policy sample in the documentation showing how to retrieve a credential store service instance you can use to retrieve csf key values.

So I assumed I could use the same trick to retrieve a keystore service instance and retrieve my key pair. Well, wrong… There is a KeyStore service available, but it will not retrieve the (default JKS) keystore; it retrieves some other keystores, which reside in a keystores.xml file in the config/fmwconfig directory. The KeyStore service's getKeyStore method was talking about stripeNames I had never heard of before, and setting the authorization permissions to access the keystores, similar to what I did to access the credential store service, did not work, so I could not get it working. The KeyStore service's getProperties method did return the right keystore properties (from the jps-config.xml file), so I concluded I could at least use this method to retrieve the name, location and other properties of the (default) keystore. To retrieve the keystore itself I had to find another way.

Later on I found an alternative approach to retrieve a credential store and the keystore properties. After some browsing through the wsm and jps packages I found the oracle.wsm.security.jps.JpsManager class. This class has the method getCredentialStore, returning a credential store, and the method getKeyStoreConfig, returning the keystore property map I described earlier. So you can use this class as well. But why wasn't this in any sample?

Now for the second part of my challenge: how to actually get a key pair from the keystore. I figured out two, of course undocumented, approaches:
1. oracle.wsm.security.jps.WsmKeyStoreFactory.getKeyStore
2. oracle.security.jps.internal.common.util.KeyStoreUtil.loadKeystore
Although the latter method has the clearer signature, in my opinion, I would still recommend using the first one. The fact that the package name of the second class contains "internal" suggests it is not meant to be used by us mere mortals.

Now for the actual creation of the policy headers and signatures. Does WSM offer documented convenience classes that help me easily build up my WS-Security headers and do the signing and encrypting? I could not figure this out; it is unclear what to use. I would have expected some guidance here on how to construct a policy in a performant way: some best practices and do's and don'ts, so you don't end up with a policy blocking all message processing under high load.

This can only lead to one conclusion: you can build your own custom policies, but keep them simple. Add some headers or do some logging, but do not try to implement sophisticated policies without knowing the implications for performance. If you really need to build a lot of custom policies it might even be wise to look at SOA appliances that do this work, such as Layer 7 or Vordel (I cannot advise DataPower as it is a little too blue for my taste ;-) ). The amount of time and money you need to invest to create custom policies in OWSM could end up costing more than buying a dedicated appliance optimized for this task alone. If I were to draw a parallel, building a custom policy now resembles building your own car from scratch rather than building a custom car.

As a former employee of the Dutch distributor of AmberPoint in the Benelux, I am somewhat surprised Oracle does not mention this product as an add-on or even as an alternative policy management and enforcement product. AmberPoint has extensive support for SOA management and security policies. Somehow Oracle decided to only market the business transaction monitoring functionality of AmberPoint, although it contains so much more. It could have extended the OWSM product so nicely. But perhaps it is too early for this; the merger is still ongoing. The fruits of this cooperation will reach us in due time.

So what can be done today? We can share our own experiences and best practices to make these custom policies usable for the masses. So write blog posts and share your code so we can all benefit. In an upcoming post I will talk about a custom policy I recently made.


OWSM Custom Assertion – Part 1 – Setting up the basic structure


With custom assertions you can create your own specific policies. There are a number of out-of-the-box policy implementations available, covering most of the common WS-Security profiles and other, non-security-related policies like logging. If you want to create your own security policy, one of the things you need is access to the credential store and keystore. There is some sample code on how to access the credential store; unfortunately I could not find any sample code on how to access the keystore. In this blog I will show you how I implemented this using some of the available, but not well documented, Oracle utility classes.

What I want to do is create an abstract assertion class that gives me the following basic functionality:

a) Property handling: my assertion will have properties I can set in the policy administration inside Enterprise Manager (EM). Setting properties there means their values become the base settings whenever the assertion is attached to a web service; I will call this design time. These property values can be overwritten when the assertion is actually assigned to a web service, for example in OSB; I will call this runtime overwriting. This class will need to handle both.

b) Keystore access: I want to be able to get private and public keys from the keystore I configured in EM. One of my assertion properties will be a csf-key containing the alias and password of the private key I will use to sign (or encrypt; I am not yet sure what I want my custom assertion to do) the web service SOAP message. So I will need to access the credential store as well, preferably through the JpsManager. This is a utility class that can access the keystore configuration and the credential store; browsing through the Oracle code, this looks like the preferred way to do it.

The JpsManager class has some useful methods to access the jps-config.xml file and the credential store:

  • setAuthenticationMode: must be called before you can use the JpsManager. I used the value anonymous.
  • getKeyStoreLevelCredentialStore: retrieves an instance of the credential store, needed to get the csf-key values.
  • getKeyStoreConfig: gets the keystore configuration as it is configured inside the jps-config.xml file.

Browsing through the Oracle classes I found the oracle.wsm.security.policy.scenario.util.ScenarioUtils utility class, offering some nice methods I can use as well:

  • isJpsEnv: checks whether the jps configuration is active in this environment.
  • getKeyStoreCredsFromCSF: gets the username and password from the credential store using a csf-key.
  • getConfigPropertyValue: retrieves the value of a property. It looks inside the MessageContext first, so a runtime value wins; next it checks the design-time properties loaded at initialization; finally it falls back to the jps configuration retrieved from jps-config.xml.
  • getConfigPropertyRecipientCert: a specialized version of the method above that returns an actual certificate object.
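The three-level lookup order of getConfigPropertyValue can be pictured with a small standalone sketch (the method and the maps are illustrative, not the Oracle implementation):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

public class PropertyLookupSketch {
    // Illustrative re-implementation of the lookup order described above:
    // runtime (message context) first, then design time, then jps-config.xml.
    static String lookup(String name, Map<String, String> runtimeProps,
                         Properties designTimeProps, Map<String, String> jpsConfig) {
        String v = runtimeProps.get(name);                       // 1. runtime override
        if (v != null && !v.trim().isEmpty()) return v;
        v = designTimeProps.getProperty(name);                   // 2. design-time value
        if (v != null && !v.trim().isEmpty()) return v;
        return jpsConfig == null ? null : jpsConfig.get(name);   // 3. jps-config.xml
    }

    public static void main(String[] args) {
        Map<String, String> runtime = new HashMap<>();
        Properties designTime = new Properties();
        Map<String, String> jps = new HashMap<>();
        designTime.setProperty("keystore.sig.csf.key", "sign-csf-key");
        jps.put("keystore.sig.csf.key", "jps-default");
        // No runtime override: the design-time value wins
        System.out.println(lookup("keystore.sig.csf.key", runtime, designTime, jps));
        // With a runtime override, it wins over both
        runtime.put("keystore.sig.csf.key", "my.sig.csf.key");
        System.out.println(lookup("keystore.sig.csf.key", runtime, designTime, jps));
    }
}
```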

In the init method of the CustomAssertion class an instance of the JpsManager is created, and the design-time assertion properties are loaded.

Each time the assertion is executed, an instance of a WsmKeyStore is created. The instance needs to be created there because the csf-keys for the signing and encryption alias/password credentials can be overwritten at runtime. The method that retrieves all parameters needed to create a WsmKeyStore, and finally creates the instance, is the setWsmKeyStore method.

This results in the following:

[sourcecode language="java"]

package nl.amis.custompolicy;

import java.security.cert.X509Certificate;

import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.logging.Level;
import java.util.logging.Logger;

import javax.xml.namespace.NamespaceContext;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathExpressionException;
import javax.xml.xpath.XPathFactory;

import oracle.security.jps.service.credstore.CredentialStore;

import oracle.wsm.common.sdk.IContext;
import oracle.wsm.common.sdk.IMessageContext;
import oracle.wsm.common.sdk.WSMException;
import oracle.wsm.policy.model.IAssertion;
import oracle.wsm.policy.model.IAssertionBindings;
import oracle.wsm.policy.model.IProperty;
import oracle.wsm.policy.model.impl.Config;
import oracle.wsm.policy.model.impl.SimpleAssertion;
import oracle.wsm.policyengine.IExecutionContext;
import oracle.wsm.policyengine.impl.AssertionExecutor;
import oracle.wsm.security.SecurityException;
import oracle.wsm.security.jps.JpsManager;
import oracle.wsm.security.jps.WsmKeyStore;
import oracle.wsm.security.jps.WsmKeyStoreFactory;
import oracle.wsm.security.policy.scenario.util.ScenarioUtils;
import oracle.wsm.security.policy.scenario.util.ScenarioUtils.Credentials;

import org.w3c.dom.Element;
import org.w3c.dom.Node;

public abstract class CustomAssertion extends AssertionExecutor {

private static final String CLASSNAME = CustomAssertion.class.getName();
private static final Logger TRACE = Logger.getLogger(CLASSNAME);
protected IAssertion mAssertion = null;
protected IExecutionContext mEcontext = null;
protected IContext mIcontext = null;
private JpsManager jpsManager;
private WsmKeyStore wsmKeyStore;
private Properties configProps;

public CustomAssertion(String tag) {
jpsManager = null;
wsmKeyStore = null;
configProps = new Properties();
}

public void destroy() {
}

public JpsManager getJpsManager() {
return jpsManager;
}

public WsmKeyStore getWsmKeyStore() {
return wsmKeyStore;
}

public Properties getConfigProperties() {
return configProps;
}

public void init(IAssertion iAssertion,  IExecutionContext iExecutionContext, IContext iContext) throws WSMException {
mAssertion = iAssertion;
mEcontext = iExecutionContext;
mIcontext = iContext;
try {
if (ScenarioUtils.isJpsEnv()) {
jpsManager = new JpsManager();
jpsManager.setAuthenticationMode("anonymous");
}
} catch (SecurityException e) {
throw new WSMException(e);
}
IAssertionBindings bindings = ((SimpleAssertion)(this.mAssertion)).getBindings();
if (bindings != null) {
List cfgl = bindings.getConfigs();
if (!cfgl.isEmpty()) {
Config cfg = (Config)cfgl.get(0);
List<IProperty> configProperties = cfg.getProperties();
if (configProperties != null) {
for (IProperty configProperty : configProperties) {
String propName = configProperty.getName();
String propValue = configProperty.getValue();
if (propValue == null || propValue.trim().isEmpty())
propValue = configProperty.getDefaultValue();
if (propValue != null)
configProps.setProperty(propName, propValue);
}
}
}
}
}

protected boolean setWsmKeyStore(IMessageContext msgContext) throws SecurityException {
// Retrieve Credential Store
CredentialStore credentialStore = jpsManager.getKeyStoreLevelCredentialStore();
if (credentialStore == null) {
throw new SecurityException("credentialstore not available Error");
}
// Retrieve KeyStore Configuration from jps-config.xml
Map<String,String> keyStoreConfig = jpsManager.getKeyStoreConfig();
if (keyStoreConfig == null) {
throw new SecurityException("keystore configuration not available Error");
}
// Retrieve Keystore Type from KeyStore Configuration
String keystoreType = keyStoreConfig.get("keystore.type");
if (keystoreType != null && keystoreType.trim().isEmpty()) {
throw new SecurityException("keystore type not set Error");
}
if (!WsmKeyStore.KEYSTORE_TYPES_ENUM.JKS.toString().equalsIgnoreCase(keystoreType)) {
throw new SecurityException("Only keystore of type JKS is supported");
}
// Retrieve Keystore location from KeyStore Configuration
String location = keyStoreConfig.get("location");
if (location != null && location.trim().isEmpty()) {
throw new SecurityException("keystore location not set Error");
}
// Retrieve Keystore CSF Map from KeyStore Configuration
String keystoreCSFMap = keyStoreConfig.get("keystore.csf.map");
if (keystoreCSFMap != null && keystoreCSFMap.trim().isEmpty()) {
throw new SecurityException("Keystore CSF Map not set Error");
}
// Retrieve Keystore csf key from KeyStore Configuration
String keyStorePassCSFKey = keyStoreConfig.get("keystore.pass.csf.key");
// Retrieve Keystore password from credential Store
String keyStorePassword = null;
if (keyStorePassCSFKey != null ) {
Credentials keystorePassCreds =
ScenarioUtils.getKeyStoreCredsFromCSF(keystoreCSFMap,
keyStorePassCSFKey,
credentialStore);
if (keystorePassCreds!= null)
keyStorePassword = new String(keystorePassCreds.getPassword());
}
// Retrieve signature csf key from KeyStore Configuration or design time or runtime properties
String keystoreSigCSFKey = ScenarioUtils.getConfigPropertyValue("keystore.sig.csf.key",
msgContext,
getConfigProperties(),
keyStoreConfig);
if (keystoreSigCSFKey != null && keystoreSigCSFKey.trim().isEmpty()) {
throw new SecurityException("signature csf key is empty");
}
// Retrieve signature alias and password from credential store
String signAlias = null;
String signPassword = null;
Credentials signCreds = ScenarioUtils.getKeyStoreCredsFromCSF(keystoreCSFMap,
keystoreSigCSFKey,
credentialStore);
if (signCreds != null) {
signPassword = new String(signCreds.getPassword());
signAlias = signCreds.getUsername();
}
// Retrieve encryption csf key from KeyStore Configuration or design time or runtime properties
String keystoreEncCSFKey = ScenarioUtils.getConfigPropertyValue("keystore.enc.csf.key",
msgContext,
getConfigProperties(),
keyStoreConfig);
if (keystoreEncCSFKey != null && keystoreEncCSFKey.trim().isEmpty()) {
throw new SecurityException("encryption csf key is empty");
}
// Retrieve encryption alias and password from credential store
String cryptAlias = null;
String cryptPassword = null;
Credentials cryptCreds = ScenarioUtils.getKeyStoreCredsFromCSF(keystoreCSFMap,
keystoreEncCSFKey,
credentialStore);
if (cryptCreds != null) {
cryptPassword = new String(cryptCreds.getPassword());
cryptAlias = cryptCreds.getUsername();
}
// Retrieve recipient certificate from design time or runtime properties
X509Certificate recipientCert =
ScenarioUtils.getConfigPropertyRecipientCert(msgContext,
getConfigProperties(),
null);
// Retrieve recipient alias from design time or runtime properties
String keystoreRecipientAlias =
ScenarioUtils.getConfigPropertyValue("keystore.recipient.alias",
msgContext,
getConfigProperties(), null);
if (keystoreRecipientAlias != null && keystoreRecipientAlias.trim().isEmpty()) {
throw new SecurityException("recipient alias is empty");
}

wsmKeyStore =
WsmKeyStoreFactory.getKeyStore(location, keystoreType, "keystore",
keyStorePassword, signAlias,
signPassword, cryptAlias,
cryptPassword, keystoreRecipientAlias,
recipientCert);
return wsmKeyStore != null;
}

public static Node getDataNode(Element payload, final HashMap<String, String> namespaces, String xpathStr) {
Node node = null;
try {
NamespaceContext ctx = new NamespaceContext() {
public String getNamespaceURI(String prefix) {
return namespaces.get(prefix);
}
public Iterator getPrefixes(String val) {
return null;
}
public String getPrefix(String uri) {
return null;
}
};
XPathFactory xpathFact = XPathFactory.newInstance();
XPath xpath = xpathFact.newXPath();
xpath.setNamespaceContext(ctx);
node = (Node)xpath.evaluate(xpathStr, payload, XPathConstants.NODE);
} catch (XPathExpressionException ex) {
ex.printStackTrace();
return null;
}
return node;
}
}
[/sourcecode]
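The XPath pattern inside the getDataNode helper can be exercised standalone with plain JDK classes; a self-contained sketch (the sample payload, namespace and element names are made up):

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Iterator;
import javax.xml.namespace.NamespaceContext;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Element;
import org.w3c.dom.Node;

public class GetDataNodeDemo {
    public static void main(String[] args) throws Exception {
        // Sample payload, standing in for the SOAP body the assertion receives
        String xml = "<ord:Order xmlns:ord='http://example.local/order'>" +
                     "<ord:CardNumber>4111</ord:CardNumber></ord:Order>";
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true); // required for prefix-aware XPath
        Element payload = dbf.newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)))
                .getDocumentElement();

        final HashMap<String, String> namespaces = new HashMap<>();
        namespaces.put("ord", "http://example.local/order");
        // Same prefix-to-URI resolution trick as the getDataNode helper above
        XPath xpath = XPathFactory.newInstance().newXPath();
        xpath.setNamespaceContext(new NamespaceContext() {
            public String getNamespaceURI(String prefix) { return namespaces.get(prefix); }
            public Iterator getPrefixes(String val) { return null; }
            public String getPrefix(String uri) { return null; }
        });
        Node node = (Node) xpath.evaluate("//ord:CardNumber", payload, XPathConstants.NODE);
        System.out.println(node.getTextContent());
    }
}
```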

In my next post I will create the actual assertion itself, building the WS-Security headers using Oracle's WS-Security utility classes.

OWSM Custom x509 Assertion – Part 2 – Creating outgoing client assertion


In the previous post I explained how you can access the credential store and keystore using the configuration stored in the jps-config.xml file. I also explained how you can read assertion properties. I put this code inside my base class CustomAssertion.java, which is repeated below.

[sourcecode language="java" collapse="true" autolinks="false"]

package nl.amis.custompolicy.simplex509;

import java.security.cert.X509Certificate;

import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import javax.xml.namespace.NamespaceContext;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathExpressionException;
import javax.xml.xpath.XPathFactory;

import oracle.security.jps.service.credstore.CredentialStore;

import oracle.wsm.common.sdk.IContext;
import oracle.wsm.common.sdk.IMessageContext;
import oracle.wsm.common.sdk.WSMException;
import oracle.wsm.policy.model.IAssertion;
import oracle.wsm.policy.model.IAssertionBindings;
import oracle.wsm.policy.model.IProperty;
import oracle.wsm.policy.model.impl.Config;
import oracle.wsm.policy.model.impl.SimpleAssertion;
import oracle.wsm.policyengine.IExecutionContext;
import oracle.wsm.policyengine.impl.AssertionExecutor;
import oracle.wsm.security.SecurityException;
import oracle.wsm.security.jps.JpsManager;
import oracle.wsm.security.jps.WsmKeyStore;
import oracle.wsm.security.jps.WsmKeyStoreFactory;
import oracle.wsm.security.policy.scenario.util.ScenarioUtils;
import oracle.wsm.security.policy.scenario.util.ScenarioUtils.Credentials;

import org.w3c.dom.Element;
import org.w3c.dom.Node;

public abstract class CustomAssertion extends AssertionExecutor {

protected IAssertion mAssertion = null;
protected IExecutionContext mEcontext = null;
protected IContext mIcontext = null;
private JpsManager jpsManager;
private WsmKeyStore wsmKeyStore;
private Properties configProps;

public CustomAssertion(String tag) {
jpsManager = null;
wsmKeyStore = null;
configProps = new Properties();
}

public void destroy() {
}

public JpsManager getJpsManager() {
return jpsManager;
}

public WsmKeyStore getWsmKeyStore() {
return wsmKeyStore;
}

public Properties getConfigProperties() {
return configProps;
}

public void init(IAssertion iAssertion,
IExecutionContext iExecutionContext,
IContext iContext) throws WSMException {
mAssertion = iAssertion;
mEcontext = iExecutionContext;
mIcontext = iContext;
try {
if (ScenarioUtils.isJpsEnv()) {
jpsManager = new JpsManager();
jpsManager.setAuthenticationMode("anonymous");
}
} catch (SecurityException e) {
throw new WSMException(e);
}
IAssertionBindings bindings =
((SimpleAssertion)(this.mAssertion)).getBindings();
if (bindings != null) {
List cfgl = bindings.getConfigs();
if (!cfgl.isEmpty()) {
Config cfg = (Config)cfgl.get(0);
List<IProperty> configProperties = cfg.getProperties();
if (configProperties != null) {
for (IProperty configProperty : configProperties) {
String propName = configProperty.getName();
String propValue = configProperty.getValue();
if (propValue == null || propValue.trim().isEmpty())
propValue = configProperty.getDefaultValue();
if (propValue != null)
configProps.setProperty(propName, propValue);
}
}
}
}
}

protected boolean setWsmKeyStore(IMessageContext msgContext) throws SecurityException {
// Check whether the keystore service is available
if (jpsManager != null && !jpsManager.isKeyStoreServiceAvailable()) {
throw new SecurityException("keystore not available Error");
}
// Retrieve the credential store
CredentialStore credentialStore =
jpsManager.getKeyStoreLevelCredentialStore();
if (credentialStore == null) {
throw new SecurityException("credentialstore not available Error");
}
// Retrieve the keystore configuration
Map<String, String> keyStoreConfig = jpsManager.getKeyStoreConfig();
if (keyStoreConfig == null) {
throw new SecurityException("keystore configuration not available Error");
}
// Retrieve the keystore type
String keystoreType = keyStoreConfig.get("keystore.type");
if (keystoreType != null && keystoreType.trim().isEmpty()) {
throw new SecurityException("keystore type not set Error");
}
if (!WsmKeyStore.KEYSTORE_TYPES_ENUM.JKS.toString().equalsIgnoreCase(keystoreType)) {
throw new SecurityException("Only keystore of type JKS is supported");
}
// Retrieve the keystore path
String location = keyStoreConfig.get("location");
// Retrieve the keystore CSF map
String keystoreCSFMap = keyStoreConfig.get("keystore.csf.map");
// Retrieve the keystore password from the credential store
String keyStorePassword = null;
String keyStorePassCSFKey =
keyStoreConfig.get("keystore.pass.csf.key");
if (keyStorePassCSFKey != null) {
Credentials keystorePassCreds =
ScenarioUtils.getKeyStoreCredsFromCSF(keystoreCSFMap,
keyStorePassCSFKey,
credentialStore);
if (keystorePassCreds != null)
keyStorePassword = new String(keystorePassCreds.getPassword());
}
// Retrieve the signature CSF key
String keystoreSigCSFKey =
ScenarioUtils.getConfigPropertyValue("keystore.sig.csf.key",
msgContext,
getConfigProperties(),
keyStoreConfig);
if (keystoreSigCSFKey != null && keystoreSigCSFKey.trim().isEmpty()) {
throw new SecurityException("signature csf key is empty");
}
// Retrieve the signature alias and password
String signAlias = null;
String signPassword = null;
Credentials signCreds =
ScenarioUtils.getKeyStoreCredsFromCSF(keystoreCSFMap,
keystoreSigCSFKey,
credentialStore);
if (signCreds != null) {
signPassword = new String(signCreds.getPassword());
signAlias = signCreds.getUsername();
}
// Retrieve the encryption CSF key
String keystoreEncCSFKey =
ScenarioUtils.getConfigPropertyValue("keystore.enc.csf.key",
msgContext,
getConfigProperties(),
keyStoreConfig);
if (keystoreEncCSFKey != null && keystoreEncCSFKey.trim().isEmpty()) {
throw new SecurityException("encryption csf key is empty");
}
// Retrieve the encryption alias and password
String cryptAlias = null;
String cryptPassword = null;
Credentials cryptCreds =
ScenarioUtils.getKeyStoreCredsFromCSF(keystoreCSFMap,
keystoreEncCSFKey,
credentialStore);
if (null != cryptCreds) {
cryptPassword = new String(cryptCreds.getPassword());
cryptAlias = cryptCreds.getUsername();
}
X509Certificate recipientCert =
ScenarioUtils.getConfigPropertyRecipientCert(msgContext,
getConfigProperties(),
null);
String keystoreRecipientAlias =
ScenarioUtils.getConfigPropertyValue("keystore.recipient.alias",
msgContext,
getConfigProperties(), null);
if (keystoreRecipientAlias != null &&
keystoreRecipientAlias.trim().isEmpty()) {
throw new SecurityException("recipient alias is empty");
}

wsmKeyStore =
WsmKeyStoreFactory.getKeyStore(location, keystoreType, "keystore",
keyStorePassword, signAlias,
signPassword, cryptAlias,
cryptPassword,
keystoreRecipientAlias,
recipientCert);
return wsmKeyStore != null;
}

public static Node getDataNode(Element payload,
final HashMap<String, String> namespaces,
String xpathStr) {
Node node = null;

try {
NamespaceContext ctx = new NamespaceContext() {
public String getNamespaceURI(String prefix) {
return namespaces.get(prefix);
}

public Iterator getPrefixes(String val) {
return null;
}

public String getPrefix(String uri) {
return null;
}
};
XPathFactory xpathFact = XPathFactory.newInstance();
XPath xpath = xpathFact.newXPath();
xpath.setNamespaceContext(ctx);
node =
(Node)xpath.evaluate(xpathStr, payload, XPathConstants.NODE);
} catch (XPathExpressionException ex) {
ex.printStackTrace();
return null;
}
return node;
}
}
[/sourcecode]

In this post I will explore how complicated it is to create a WS signing policy. The policy is pretty basic: it signs the WS-Addressing headers and the SOAP body of the request, and it adds and signs a timestamp. The timestamp and signatures of the response message are verified.

There is unfortunately not much information on how to use the WS Security packages supplied by Oracle. There is a very high level document I used as a starting point. This document describes the use of cryptographic building blocks Oracle provides to implement security. These APIs are part of the Oracle Security Developer Tools (OSDT).

I created a Java project and added the following libraries:

${MIDDLEWARE_HOME}\oracle_common\modules\oracle.osdt_11.1.1\osdt_xmlsec.jar
${MIDDLEWARE_HOME}\oracle_common\modules\oracle.osdt_11.1.1\osdt_wss.jar
${MIDDLEWARE_HOME}\oracle_common\modules\oracle.wsm.agent.common_11.1.1\wsm-agent-core.jar
${MIDDLEWARE_HOME}\oracle_common\modules\oracle.wsm.common_11.1.1\wsm-policy-core.jar
${MIDDLEWARE_HOME}\oracle_common\modules\oracle.jps_11.1.1\jps-api.jar

The main class used to sign my request is oracle.security.xmlsec.wss.WSSecurity. This class signs the SOAP request: if you provide an array of ids of elements to sign (header and/or body elements) together with a binary security token, the signing happens automagically after calling the sign method.

The final result was not completely to my satisfaction:

  • I could not find an easy way of adding the mustUnderstand attribute to my security header. So I used the DOM Element method setAttributeNS to add the attribute directly.
  • The sample code used a method oracle.security.xmlsec.wss.util.WSSUtils.addWsuIdToElement to add a wsuId. When I used this method the wsu prefix was lost. So I added the wsuId attribute using the setAttributeNS method again.
  • The BinarySecurityToken is the last node within the WS-security SOAP header by default. To make it the first child I had to move it. I also wanted the WS security header itself as the first SOAP header so I had to move this one also.

Verifying the response message is also not very complicated. I first check the timestamp in the security header. If no timestamp is found an error is raised, as my policy expects one. Then I loop over all signatures inside the security header and verify them. I expect a security header with at least one or more signed elements inside. You can imagine more complicated checks on whether the SOAP response complies with a predefined security policy.

This resulted in the following assertion executor class:

[sourcecode language="java" autolinks="false"]
package nl.amis.custompolicy.simplex509;

import java.security.cert.X509Certificate;

import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.logging.Logger;

import javax.xml.namespace.QName;
import javax.xml.soap.SOAPBody;
import javax.xml.soap.SOAPEnvelope;
import javax.xml.soap.SOAPException;
import javax.xml.soap.SOAPHeader;
import javax.xml.soap.SOAPHeaderElement;
import javax.xml.soap.SOAPMessage;

import nl.amis.custompolicy.simplex509.util.Utilities;

import oracle.security.xmlsec.dsig.XSSignature;
import oracle.security.xmlsec.wss.WSSURI;
import oracle.security.xmlsec.wss.WSSecurity;
import oracle.security.xmlsec.wss.WSSecurityTokenReference;
import oracle.security.xmlsec.wss.util.WSSUtils;
import oracle.security.xmlsec.wss.util.WSSignatureParams;
import oracle.security.xmlsec.wss.x509.X509BinarySecurityToken;

import oracle.wsm.common.sdk.IContext;
import oracle.wsm.common.sdk.IMessageContext;
import oracle.wsm.common.sdk.IResult;
import oracle.wsm.common.sdk.Result;
import oracle.wsm.common.sdk.SOAPBindingMessageContext;
import oracle.wsm.common.sdk.WSMException;
import oracle.wsm.security.SecurityException;

public class SimpleX509AssertionExecutor extends CustomAssertion {

private static final String ns_wsa = "http://www.w3.org/2005/08/addressing";
private static final Logger TRACE = Logger.getLogger(SimpleX509AssertionExecutor.class.getName());

public SimpleX509AssertionExecutor() {
super("[SimpleX509AssertionExecutor] ");
}

public void destroy() {
}

public oracle.wsm.policyengine.IExecutionContext getExecutionContext() {
return this.econtext;
}

public String getAssertionName() {
return this.mAssertion.getQName().toString();
}

public IResult execute(IContext context) throws WSMException {
IResult result = new Result();
IMessageContext.STAGE stage = ((IMessageContext)context).getStage();
if (stage == IMessageContext.STAGE.request) {
try {
SOAPBindingMessageContext smc = ((SOAPBindingMessageContext)context);
// create WmsKeyStore
setWsmKeyStore((IMessageContext)context);
SOAPMessage msg = smc.getRequestMessage();
SOAPEnvelope env = msg.getSOAPPart().getEnvelope();
// You need to explicitly add the wsu prefix and namespace to the envelope.
env.addNamespaceDeclaration("wsu", WSSURI.ns_wsu);
// Now create a new <wsse:Security> Header
// newInstance will internally use SOAPHeader.addHeaderElement
WSSecurity ws = WSSecurity.newInstance(env);
// add mustUnderstand.
String prefix = env.getPrefix();
String ns_soap = env.getNamespaceURI();
ws.getElement().setAttributeNS(ns_soap, prefix + ":mustUnderstand", "1");

X509Certificate cert = getWsmKeyStore().getSignCert();
X509BinarySecurityToken x509Token = ws.createBST_X509(cert);
// remember to put this inside your WSSecurity header.
// addX509CertificateToken puts it at the beginning, you can also
// use a regular DOM method appendChild or insertChild to put it in.
ws.addX509CertificateToken(x509Token);
// optionally add an wsu:Id, so you can refer to it
x509Token.setWsuId("Cert");

// Create some security token references to this token
WSSecurityTokenReference str = ws.createSTR_X509_Ref("#Cert");
WSSignatureParams wsp = new WSSignatureParams(null, getWsmKeyStore().getSignKey());
// You need to set the STR that you have created earlier into this object;
wsp.setKeyInfoData(str);

// add timestamp
String wsuId = Utilities.addTimeStamp(ws);
// uris is an array of IDs to be signed.
ArrayList<String> uris = new ArrayList<String>();
uris.add(wsuId);

// signing addressing headers
SOAPHeader sHeader = msg.getSOAPHeader();
Iterator itr = sHeader.examineAllHeaderElements();
SOAPHeaderElement he = null;
do {
if (!itr.hasNext())
break;
he = (SOAPHeaderElement)itr.next();
if (he.getNamespaceURI().equals(ns_wsa)) {
wsuId = (new StringBuilder()).append("id-").append(oracle.security.xmlsec.util.XMLUtils.randomName()).toString();
he.setAttributeNS(WSSURI.ns_wsu, "wsu:Id", wsuId);
uris.add((new StringBuilder()).append("#").append(wsuId).toString());
}
} while (true);

// Sign the body.
SOAPBody sBody = msg.getSOAPBody();
wsuId = (new StringBuilder()).append("id-").append(oracle.security.xmlsec.util.XMLUtils.randomName()).toString();
sBody.setAttributeNS(WSSURI.ns_wsu, "wsu:Id", wsuId);
uris.add((new StringBuilder()).append("#").append(wsuId).toString());

wsp.setSOAPMessage(msg);
// Now sign or encrypt some data (refer to following sections)
// These should use the above STRs
String urisArr[] = uris.toArray(new String[uris.size()]);
ws.sign(urisArr, wsp, null);

// put BinaryToken to front. Assume it only contains one BinaryToken
List btokens = ws.getBinaryTokens();
if (btokens != null && btokens.size() > 0) {
Iterator it = btokens.iterator();
do {
if (!it.hasNext())
break;
Object bo = it.next();
if ((bo instanceof X509BinarySecurityToken) &&
((X509BinarySecurityToken)bo).getWsuId().equals(x509Token.getWsuId())) {
X509BinarySecurityToken existingToken = (X509BinarySecurityToken)bo;
ws.removeChild(existingToken.getNode());
break;
}
} while (true);
}
WSSUtils.prependChild(ws, x509Token.getNode());

// Put the WS Security header in front of all other SOAP Headers
itr = sHeader.examineAllHeaderElements();
do {
if (!itr.hasNext())
break;
he = (SOAPHeaderElement)itr.next();
if (he.getNamespaceURI().equals(WSSURI.ns_wsu)) {
sHeader.removeChild(he);
}
} while (true);
Utilities.prependChild(sHeader, ws.getNode());

TRACE.fine("Finished");
result.setStatus(IResult.SUCCEEDED);
return result;
} catch (Exception e) {
throw new WSMException("Fault", e);
}
} else if (stage == IMessageContext.STAGE.response) {
try {
SOAPBindingMessageContext smc = ((SOAPBindingMessageContext)context);
SOAPMessage message = smc.getResponseMessage();
SOAPEnvelope soapenv = message.getSOAPPart().getEnvelope();
WSSecurity sec = Utilities.getSecurityHeader(soapenv);
if (sec==null) {
throw new SecurityException("WS Security header expected.");
}
if (!Utilities.validateTimestamp(sec.getTimestamp())) {
throw new SecurityException("Timestamp invalid");
}
List<XSSignature> sigs = sec.getSignatures();
if (sigs==null || sigs.isEmpty()) {
throw new SecurityException("Signed elements expected");
}
for (XSSignature signature : sigs) {
if (!sec.verify(signature)) {
throw new SecurityException("Signature invalid");
}
}
} catch (Exception e) {
throw new WSMException("Fault",e);
}
result.setStatus(IResult.SUCCEEDED);
return result;
}
result.setStatus(IResult.SUCCEEDED);
return result;
}

}
[/sourcecode]

I created a Utility class that contains a few utility functions. Most of them are pretty much self-describing. The timestamp methods are pretty basic. They do not take into account any time skew between systems. You can add an allowance for this (5 minutes is generally accepted) to make sure timestamps are not wrongly rejected.

[sourcecode language="java" autolinks="false"]
package nl.amis.custompolicy.simplex509.util;

import java.util.Date;

import javax.xml.soap.SOAPEnvelope;
import javax.xml.soap.SOAPException;

import oracle.security.xmlsec.wss.WSSecurity;
import oracle.security.xmlsec.wss.WSUCreated;
import oracle.security.xmlsec.wss.WSUExpires;
import oracle.security.xmlsec.wss.WSUTimestamp;

import oracle.wsm.security.SecurityException;

import org.w3c.dom.Document;
import org.w3c.dom.Node;

public class Utilities {

public static String addTimeStamp(WSSecurity sec) {
Document doc = sec.getOwnerDocument();
WSUTimestamp ts = new WSUTimestamp(doc);
ts.setId((new StringBuilder()).append("Timestamp-").append(oracle.security.xmlsec.util.XMLUtils.randomName()).toString());
WSUCreated wsuCreated = new WSUCreated(doc);
Date now = new Date();
wsuCreated.setValue(now);
ts.setCreated(wsuCreated);
String expiryTime = "300";
WSUExpires expiry = new WSUExpires(doc);
expiry.setValue(new Date(now.getTime() +
(long)(Integer.parseInt(expiryTime) * 1000)));
ts.setExpires(expiry);
sec.setTimestamp(ts);
return (new StringBuilder()).append("#").append(ts.getWsuId()).toString();
}

public static void prependChild(Node parent, Node newChild) {
Node firstChild = parent.getFirstChild();
if (firstChild != null)
parent.insertBefore(newChild, firstChild);
else
parent.appendChild(newChild);
}

public static WSSecurity getSecurityHeader(SOAPEnvelope soapenv) throws SecurityException {
try {
WSSecurity wsSecs[] = WSSecurity.getAllSecurityHeaders(soapenv);
if (wsSecs == null || wsSecs.length == 0)
return null;
else
return wsSecs[0];
} catch (SOAPException e) {
throw new SecurityException(e);
}
}

public static boolean validateTimestamp(WSUTimestamp timestamp) {
WSUCreated created = timestamp.getCreated();
WSUExpires expires = timestamp.getExpires();
Date currentDate = new Date();
if (currentDate.after(expires.getValue()) ||
currentDate.before(created.getValue())) {
return false;
}
return true;
}

}
[/sourcecode]
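As noted above, validateTimestamp could be made tolerant of clock skew between systems. A minimal sketch of such a variant, assuming a 5-minute allowance (the class here is mine and works on plain Date values rather than the OSDT WSUTimestamp type):

```java
import java.util.Date;

// Sketch of a skew-tolerant timestamp check. A clock-skew allowance prevents
// rejecting timestamps created on a system whose clock runs slightly ahead of
// or behind ours.
public class TimestampCheck {

    private static final long ALLOWED_SKEW_MILLIS = 5L * 60L * 1000L; // 5 minutes

    public static boolean validateTimestamp(Date created, Date expires) {
        long now = System.currentTimeMillis();
        // reject only if we are past the expiry by more than the allowed skew
        if (now > expires.getTime() + ALLOWED_SKEW_MILLIS) {
            return false;
        }
        // or if the creation time lies more than the allowed skew in the future
        if (now < created.getTime() - ALLOWED_SKEW_MILLIS) {
            return false;
        }
        return true;
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        // created 2 minutes in the future (sender clock ahead of ours): accepted thanks to the skew allowance
        System.out.println(validateTimestamp(
                new Date(now + 2 * 60 * 1000), new Date(now + 7 * 60 * 1000))); // prints true
        // expired 10 minutes ago: rejected even with the skew allowance
        System.out.println(validateTimestamp(
                new Date(now - 20 * 60 * 1000), new Date(now - 10 * 60 * 1000))); // prints false
    }
}
```

Wiring this into the Utilities class would only require extracting the Created and Expires dates from the WSUTimestamp, as the original validateTimestamp already does.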

How did I test this code? First of all you need to put your assertion jar in the lib directory of the application server and restart the server every time you want to test a change. This makes the development cycle somewhat long and tedious. I initially wanted to use logging to trace my program, but given the long development cycle I skipped that approach and figured out remote debugging in JDeveloper (I am used to Eclipse) so I could step through my code.

When I got a decent signed request I used SoapUI to create a MockService that processes this request and returns a signed response that my assertion could verify. This works like a charm. The documentation explains this nicely.

Review of Oracle Service Bus 11g Development Cookbook (Packt Publishing) by Edwin Biemond, Guido Schmutz, Eric Elzinga et al.


Recently I gained access to an electronic copy of the just released Oracle Service Bus 11g Development Cookbook, written by five authors – all experts on OSB and three personal acquaintances of mine. I was very interested in learning about the final result after hearing many intermediate comments during the writing process as well as reading the occasional remark on Twitter. Knowing Guido, Eric and Edwin and assuming the same expert level for the other two authors, I anticipated a very interesting read.


Below I will share my impressions from browsing through this solid 500+ page volume. Note: the homepage for the book can be found here: http://www.packtpub.com/oracle-service-bus-11g-development-cookbook/book.

Chapter 1 provides an introduction to OSB. It demonstrates development of a simple OSB service – one that merely echoes the input – and then step by step increases the complexity by introducing a business service connecting to an external web service, using a routing step and finally bringing XQuery transformations and the Assign steps that leverage them into the picture. It provides a clear way to get going – a good refresher for readers with OSB experience and a quick start for novice OSB users. That said, the recipe-based approach adopted by Packt for many of its publications does not work very well in this chapter in my opinion.

The second chapter – an overview of special operations and features in the OSB IDE (the Eclipse plugin) – lends itself much better to this recipe-style approach. This chapter shows how to move resources around within and across projects – useful to know and to look up when such operations are required. The introduction of the OSB Debugger in this chapter is valuable – I learned a trick or two from this overview. Chapter two also briefly introduces JDeveloper and its SOA Composite Editor used for SCA composite applications that run in the SOA Suite. This introduction is relevant because working with JCA adapters – while supported in the OSB run time – requires JDeveloper for the design time configuration of the adapters (unless you are a real XML jock of course). The chapter demonstrates how the OSB project in Eclipse can be set up to work well with the JCA resources created in JDeveloper.

Chapters 3, 4 and 5 are a little bit alike – each discussing a particular transport with (and for JEJB even through) the OSB. Chapter 3 explains the JMS transport, or how OSB can use proxy services to be triggered by inbound traffic from (WebLogic) JMS destinations and business services for outbound messages. Chapter 4 does the same for EJB and JEJB – describing how a proxy service can be exposed as a (remote) stateless session bean and how a business service can be used to call out to a (remote) EJB. Chapter 5 explains HTTP as a transport, both with and without SOAP.

The discussion of JMS is thorough and clear. It includes both Queue and Topic and both non-durable and durable subscriptions. It introduces useful tools – QBrowser and Hermes in SoapUI – for working with JMS objects. The chapter explains the concept of Request/Reply over JMS very well – demonstrating in detail the synchronous case and forward referencing the asynchronous case in a later chapter. The clear diagrams – used consistently throughout the book – provide clear insight into the discussion.

As part of the otherwise good description of message filtering, some indication of how content-based message selection should be done would have made sense: message selectors are part of the JMS subscription created in WLS – but these only refer to (custom) properties and headers. What should one do if the content of the message determines whether or not to process it? The JMS JCA adapter – used with SOA Suite 11g – does not enter the discussion (as happens for the File transport later on). I would be interested to learn whether the OSB JMS transport is richer or poorer in functionality than the JCA adapter – and whether in the long run it may disappear and give way to the JCA adapter or instead continue on. The chapter does not mention whether and how OSB can work with other JMS providers besides WebLogic. Given the fact that OSB runs on WLS, that may not be an extremely relevant point.

Chapter 4 introduces the EJB and JEJB transports. It makes quite clear how EJBs can be called from the OSB as well as how OSB proxy services can be published as EJBs themselves. It goes the extra mile of explaining the concept of custom converters – to convert Java objects not supported for conversion to XML by the JAX-RPC engine. It also explains how EJBs on a remote WLS domain can be invoked – just when I started wondering about that. After discussing the EJB (2.x or 3.0) transport, the chapter zooms in on the fairly recent JEJB transport, which allows pure Java objects to be moved around without the need for serialization into XML, providing a leaner and meaner mechanism when applicable. And again, just when I started wondering about manipulating the POJOs moved through OSB in this case, the chapter provides a very clear explanation of how the Java Callout can be used to have this manipulation performed in custom Java code. It is well paced, clear and all relevant.

Somewhat surprisingly, the pure Java examples in this chapter are presented in the context of JDeveloper. With the OEPE OSB plugin running in Eclipse, one would have expected to see Eclipse used as the Java IDE as well.

The HTTP transport – the subject of chapter 5 – had been used from the very beginning: the simplest OSB service created in chapter 1 used the HTTP transport for a straightforward SOAP based WebService case. Chapter 5 continues from the SOAP over HTTP case and demonstrates how plain HTTP requests – with no SOAP envelopes but plain XML content or even just simple GET requests with associated URL query parameters – can be handled. The concept of RESTful services – services that are invoked over plain HTTP using the standard HTTP verbs GET, POST, PUT, DELETE and using meaningful, resource oriented URLs such as host:port/domain/service/resource/identifier – is discussed at length. Interpreting the special URL syntax, supporting various HTTP operations and working with query parameters is explained quite well. One important aspect of many RESTful services is not discussed at all – even though the term is mentioned in the discussion of the external Beer Service – and that is the use of JSON as the format for messages, rather than XML. It would have been useful and relevant to have this extra step explained or at least referred to.

The last recipe in chapter 5 is on WebSockets – a fairly new and potentially revolutionary development for web applications. It is an intriguing topic – WebSocket communication through OSB with for example an HTML 5 client on one end and a real WebSocket server on the other. Unfortunately, I feel the topic is too grand for this book and the discussion in this chapter – while a brave attempt – leaves the reader with more questions than answers. Perhaps a brief overview and a reference to a more extensive on line document would have been better. As it stands, I am afraid the recipe does not add a lot of value.

Chapter 6 discusses the File and FTP transports, available in OSB for working with files, as well as the email transport. The first two transports can either trigger a proxy service when a file arrives on a watched location (inbound) or be used to write a file through a business service (outbound). SOA Suite 11g uses the JCA based File and FTP adapters for similar purposes and these adapters can also be used with OSB. Chapter 6 even describes how to use the JCA adapter for reading from a file somewhere in a message flow in a proxy service, because that is something the OSB File transport cannot do. Chapter 6 argues how the JCA adapters provide more functionality than the file and ftp transports in OSB. This makes me wonder why not always use the JCA adapters – apart from the hassle of using JDeveloper as a design time for these adapters. This question is not answered in the chapter – it only explains how the JCA adapters “provide far richer metadata about the file being processed” while “The File and FTP transports are not as feature-rich as the corresponding File and FTP JCA adapters”.

The discussion of the File transports is fairly complete, including the steps required to dynamically determine file name and location (for outbound) and a useful instruction on reading files in an XQuery script through the doc function.

The FTP transport is discussed in a similar vein. Another useful tool – CoreFTP – is introduced for getting a local FTP server up and running for development purposes. Note that the Oracle Database can also act as an FTP server – another interesting use case perhaps.

Triggering a proxy service through the reception of an email is a good use case, as is the ability to send an email from the OSB. The second part of chapter 6 – I am not sure why this topic did not get its own chapter, as it is not all that much related to the file and ftp transports – covers the email transport that provides exactly this functionality. The user is instructed on setting up the Apache JAMES email server and the Mozilla Thunderbird email client in preparation. Then the simple case (note that even this simple case is currently not supported with SOA Suite 11g) of receiving an email to trigger a proxy service is explained. Immediately the next recipe steps up the level of complexity (and relevancy) and discusses the processing of email attachments. The final recipe in this chapter demonstrates sending emails to static or dynamically determined addresses. I am not sure whether sending an email through the business service is the only way to ‘reply’ to an email that was received, or if a proxy service that is triggered by the reception of an email can have a response pipeline that ends in the sending of a ‘reply’ to the original email, just like an asynchronous WebService or JMS Request/Response case. This is one of the few times that the book did not answer a question that popped up.

Interacting with the Database

Chapter 7 talks about the database, or rather the communication from the OSB with the database. This type of communication was not natively supported in BEA's AquaLogic Service Bus through a special transport, so this communication is based entirely on the JCA Database Adapter that is well known in the SOA Suite. Most of this chapter is about how to configure the various inbound (polling) and outbound (perform SQL to read from or write to tables) variations of the Database Adapter. The chapter does not discuss the option to invoke PL/SQL code (stored procedures) using the database adapter. Given the fact that database access by OSB is frequently implemented through a PL/SQL based API rather than directly against tables using pure SQL, I find that a surprising omission from an otherwise so thorough book.

Most of the discussion of the Database Adapter consists of screenshots from JDeveloper where composite applications are created with Database Adapter services that are configured using the wizard in JDeveloper’s SOA Composite Editor plugin. The resulting artifacts are subsequently used in Eclipse to implement JCA transport based business services (for outbound) or proxy services (for inbound, triggered from polling). The discussion of the JCA database adapter is thorough, including a useful explanation on the Detect Omissions Flag and a clear example of using a sequencing file to record which records have been polled and processed. Note that most of this discussion, however useful, is not specific to OSB.

In addition to the Database Adapter, the chapter also talks about the AQ Adapter – the Oracle Database Advanced Queuing counterpart to JMS. In fact, WebLogic JMS can use AQ as its underlying infrastructure. I am not sure that the discussion of the AQ Adapter – even though AQ is part of the Oracle RDBMS – is best placed in the chapter on the database adapter. It feels closer to JMS, given the message and queue nature of both JMS and AQ. Well, that is the type of zealous sifting of a book that you expect a petty critic to do, isn’t it?

OSB and SOA Suite – having intimate relations

Chapter 8 talks about the interaction between OSB and SOA Suite 11g. Using the SOA-DIRECT transport, invoking a SCA composite synchronously or asynchronously from an OSB service can be performed efficiently and with a number of advanced features enabled. These include Remote Method Invocation (RMI), WS-Addressing, identity propagation, transaction propagation, attachments, high availability and clustering support, failover and load balancing. The chapter explains how to use the Direct Binding services (incoming) and references (outgoing) in SOA Suite 11g and the corresponding transport in OSB for both proxy services (for incoming requests from SOA Suite 11g) and business services (for calls to the SOA Suite 11g). The discussion is clear and to the point, but also purely technical. There is no explanation of why you may want to perform calls between OSB and SOA Suite and how the two could or should work together. The respective roles of OSB and SOA Suite, when both are around, are not touched upon. All we know from the chapter is how one can call the other. The consequences, by the way, for an end-to-end flow trace (through both OSB and SOA Suite) are not part of the chapter either. All in all, if all you want is to have OSB and SOA Suite 11g (or vice versa) working together, this chapter will get you prepared for that task.

Complex Message Flows and Composite (orchestrated) Services

Chapter 9 is not about external interaction or internal transports. It is about more complex message flows in proxy services. When we did not have OSB, but only BPEL for complex services, all composite services were nails for our BPEL hammer. With OSB added to the tool box, we have a second option for creating complex, composite services. This chapter helps to bring OSB services to the higher, coarser grained level where one proxy service can invoke multiple services – instead of just the one business service – in a single request/response flow. In short – for those who know BPEL – to bring OSB Services to a level very similar to BPEL.

The specific recipes in this chapter discuss:

  • the Service Callout action – a synchronous invocation of an external service, very similar to a BPEL Invoke activity.
  • the Publish action – a non-blocking call to a one-way service (or even a two way service whose answer is not waited for nor processed)
  • the Java Callout action – to invoke a method in a custom Java class
  • the use of custom XPath functions implemented using custom Java classes
  • the ForEach action to perform one or more actions multiple times in an iterative, sequential loop
  • the Split-Join to perform one or more actions multiple times and in parallel
  • the Validate action to perform XSD based validation on messages or parts of messages
  • use of private Proxy Services

I found this chapter good to read and useful to consume. For example the recipe where the Service Callout is applied for message enrichment, including the explanation of extending the XQuery transformation with the enrichment results from the service invoked through the call out. The discussion of the Java Callout is fine. However, it seems that the author of this chapter forgot that Java Callout was already discussed in Chapter 4 for the JEJB transport and no references are made to it. I also would have expected some reference to JAXB; OSB uses XMLBeans, a less well known alternative to the standard defined by JAXB for XML to Java unmarshalling; is that a temporary situation and will JAXB be adopted? Or is the OSB continuing its use of XMLBeans?

Custom XPath functions – implemented in custom Java classes that have to be registered in an XML file in a special folder on the OSB Server – are described very clearly, as is their use in an OSB project inside an XQuery transformation.

ForEach is fairly straightforward to use. Its discussion is crisp, clear and to the point. Its parallel-processing-enabled cousin, the Split-Join, is a far more complex feature. Its discussion is okay, but it feels like it does not do the topic full justice – though I cannot really point out what the issue is I see with it. In this section I ran into one of the few – but funny – spelling errors: the split-joint (that’s what you get from all those Dutch guys participating on a book I suppose).


Validate is the second-to-last topic of the chapter. It is simple enough. Its discussion is decorated with a nice example of using a JMX exposed MBean that can be used to manipulate settings immediately impacting the OSB service behavior at run time, through for example the JConsole. The last subject is the concept of private proxy services that can only be used by other OSB services. The book argues successfully how these private services can be used to create reusable blocks of processing logic; the example in the chapter is a simple private logging service that is efficiently invoked through Publish actions in other services.

Reliability

Chapter 10 does not so much add new functional features but instead discusses the non-functional area of reliability. The use of global transactions (and the consequences of actions outside those transactions) is discussed, as are JMS message persistence and the JMS message redelivery option used to control the number of attempts to deliver failed messages. Note that most of this discussion is not OSB specific but generic WebLogic JMS theory. I like the explanation about XA (global transactions). It is very instructive. The discussion focuses only on JMS – which is a pity as XA concerns more transports and components than just JMS.

The last topics in this chapter are not simple ones: reliable messaging for WebService interactions using WS-RM (one of the WS-* specifications) and using SOAP over JMS. The use of WS-RM is made clear. However, it is a little bit confusing that in this chapter the OSB service has a WLS 9 policy assigned to it, when in chapters 11 and 12 we are warned against using these ‘deprecated’ policies and told to pick the modern OWSM policies. Subsequently, a Java client to invoke this OSB proxy service with the WS-RM policy applied to it is developed – in JDeveloper rather than in Eclipse. It is very easy to apply Oracle OWSM security and QoS policies to web service proxies in JDeveloper, so that is probably the explanation for this. It is a fairly difficult topic and the author makes a valiant attempt to demonstrate what is going on. It still requires a lot of focus to carefully read and process this section of the book.

Security

Chapters 11 and 12 discuss security. First message level security – through authentication and encryption using several OWSM security policies attached to the proxy service, and through message access control – user or role based authorization specified through an access list defined through the OSB Console for a proxy service. After securing OSB proxy services, we also get instructions on calling out to secure services from OSB by attaching security policies to business services. The coverage of the various OWSM policies and of the required setup to get authentication, certificates and encryption working is very good – especially given the nature of this topic and the lack of very clear documentation in this area. A job well done!

The last chapter – transport security – is easier on the brain. The use of SSL to implement transport layer security is largely outside the scope of OSB. However, the first recipe reiterates basic authentication – already discussed in chapter 11 – and adds OSB service accounts to the overall picture. The next topic is the configuration of WebLogic Server to allow communication to take place over SSL. When this is set up, we can configure proxy services to only accept requests over SSL. This is clearly demonstrated. How to invoke an external service over SSL from a business service is not discussed. And with that recipe, the book ends – a little abruptly. No final summary or conclusions, no suggested further reading or supporting resources. Only an index, and that is all folks.

What is not in this book?

Some areas around OSB that I had considered relevant for this book that were largely or even entirely lacking from it – no complaint, just stating a fact – include:

  • reporting, tracing, alerting, SLA management and other administration oriented topics
  • throttling, caching and load balancing
  • tuning, scalability
  • designing the application architecture and deciding how to engage the OSB in a larger picture
  • OSB Service development through the OSB Console (the console only makes a few very brief appearances)
  • best practices – most recipes show what can be done, but do not discuss what is the best approach for specific functional requirements or the usage of certain features

Conclusion

One of the things I really like about this book is that it seems like the authors can read my mind. Just when I start wondering after reading a certain section ‘now how would I be able to do this next step or this (more complex) variation of what was discussed before’, they seem to move on with answering exactly that question. Many books seem to provide only the basic example without going the extra mile that you will need in the real world. This book, in contrast, does go that extra mile: it discusses and demonstrates many of the variations and intricacies the real world will require of us.

The book explains the OSB in detail and also describes how to make the OSB and SOA Suite 11g (SCA container) work together. It stays at the operational level and does not really discuss architectural considerations. Questions like “why to use one product or feature instead of another one”, “how to set up governance for OSB artifacts” or “how to design and implement a multi tier architecture for specific purposes such as data integration, mobile applications or B2B interaction” are not within the scope of the book. It is understandable – and yet I feel that it would have added a lot of value. The sheer knowledge the five authors bring to the table would have made for great discussions at tactical level too. Maybe they are preserving that for a next book. I would certainly welcome that.

The preface to the book describes the intended audience: “If you are an intermediate SOA developer who is using Oracle Service Bus to develop service and message-orientated applications, then this book is for you. The book assumes that you have a working knowledge of fundamental SOA concepts and Oracle Service Bus.” I think this is exactly right. Getting started with OSB through this book might be a bit tough – you need a slightly more gradual introduction than this. However, once you have done some introductory tutorials and perhaps have talked to more seasoned OSB developers and built up some initial feeling, this book is perfect to propel you to a level where you can quickly become productive in a broad scope of tasks. Most of the activities you will have to perform as an OSB developer are discussed in this book. You need additional resources to learn about XML, WebServices (WSDL, SOAP) and XQuery, and depending on what you need to do you may have a need for SQL and/or Java skills, but for the work you do with OSB itself – it is bull’s eye!

I hope for the authors that the 12c release of OSB is still some way off, so this book has its deserved time in the limelight. I also hope on the readers’ behalf that this band of authors will reconvene to produce a 12c version of their book once that release hits the streets. I congratulate them on their work.

I have enjoyed reading this book, I have learned several useful things from it and I am sure that many developers – either just starting out with OSB or already working with it for quite some time – will benefit from it. It contains many recipes that will prove useful to re-read once a specific requirement needs to be implemented. And it provides quite some inspiration for trying out things – something I really like in a book.

SOA Suite 12c: Invoke Enterprise Scheduler Service from a BPEL process to submit a job request


The Fusion Middleware 12.1.3 platform contains the ESS or Enterprise Scheduler Service. This service can be used as an asynchronous, schedule based job orchestrator. It can execute jobs that are Operating System jobs, Java calls (local Java or EJB), PL/SQL calls, and Web Service calls (synchronous, asynchronous and one-way) including SOA composite, Service Bus and ADF BC web services.

Jobs and schedules can be defined from client applications through a Java API or through the Enterprise Manager FMW Control user interface. Additionally, ESS exposes a web service through which (predefined) jobs can be scheduled. This web service can be invoked from BPEL processes in SOA composites. In this article I will briefly demonstrate how to do the latter: submit a request to the Enterprise Scheduler Service to execute a job according to a specified schedule.

Because the job cannot be executed anonymously, the ESS Scheduler Service has an attached WSM policy to enforce credentials to be passed in. As a consequence, the SOA composite that invokes the service needs to have a WSM policy attached to the reference binding for the ESS Service in order to provide those required credentials. This article explains how to do that.

Steps:

  • Preparation: create an ESS Job Definition and a Schedule – in my example these are SendFlightUpdateNotification (which invokes a SOA composite to send an email) and Every5Minutes
  • Ensure that the ESS Scheduler Web Service has a WSM security policy attached to enforce authentication details to be provided (see description in this article: FMW 12.1.3 – Invoking Enterprise Scheduler Service Web Services from SoapUI)
  • Create a SOA composite application with a one way BPEL process exposed as a SOAP Web Service
  • Add a Schedule Job activity to the BPEL process and configure it to request the SendFlightUpdateNotification according to the Every5Minutes schedule; pass the input to the BPEL process as the application property for the job
  • Set a WSDL URL for a concrete WSDL – instead of the abstract one that is configured by default for the ESS Service
  • Attach a WSM security policy to the Reference Binding for the ESS Scheduler Web Service
  • Configure username and password as properties in composite.xml file – to provide the authentication details used by the policy and passed in security headers
  • Deploy and Test

 

Preparation: create an ESS Job Definition and a Schedule

in my example these are SendFlightUpdateNotification (which invokes a SOA composite to send an email)

image

and Every5Minutes

image

 

Ensure that the ESS Scheduler Web Service has a WSM security policy attached

to enforce authentication details to be provided (see description in this article: FMW 12.1.3 – Invoking Enterprise Scheduler Service Web Services from SoapUI)

image

Create a SOA composite application

with a one way BPEL process exposed as a SOAP Web Service

image

Add a Schedule Job activity to the BPEL process

image

and configure it to request the SendFlightUpdateNotification according to the Every5Minutes schedule;

image

image

Leave the start time and end time open (these are now inherited from the schedule)

SNAGHTML62b8333

Open the Application Properties tab.

SNAGHTML62bc65a
Here we can override the default values for Job application properties with values taken for example from the BPEL process instance variables:

image

SNAGHTML62ce36c

 

Note: in order to select the Job and Schedule, you need to create a database MDS connection to the MDS partition with the ESS User Meta Data

SNAGHTML62abfb6

 

When you close the Schedule Job definition, you will probably see this warning:

image

Click OK to acknowledge the message. We will soon replace the WSDL URL on the reference binding to correct this problem.

The BPEL process now looks like this:

image

Set a concrete WSDL URL on the Reference Binding for the ESS Service

Get hold of the URL for the WSDL for the live ESS Web Service.

image

image

image

image

Then right click the ESS Service Reference Binding and select Edit from the menu. Set the WSDL URL in the field in the Update Reference dialog.

 

image

Attach a WSM security policy to the Reference Binding for the ESS Scheduler Web Service

Because the ESS Scheduler Web Service is protected by a WSM Security Policy, it requires callers to pass the appropriate WS Security header. We can simply attach a WSM policy of our own to achieve that effect. We can even do so through EM FMW Control, in the run time environment, rather than right here at design time. But this time we will go for the design time, developer route.

Right click the EssService reference binding. Select Configure SOA WS Policies | For Request from the menu.

image

The dialog for configuring SOA WS Policies appears. Click on the plus icon for the Security category. From the list of security policies, select oracle/wss_username_token_client_policy. Then press OK.

image

The policy is attached to the reference binding.

SNAGHTML66e5071

Press OK again.

What we have configured at this point will cause the OWSM framework to intercept the call from our SOA composite to the EssService and inject WS Security headers into it. Or at least, that is what it would like to do. But the policy framework needs access to credentials to put in the WS Security header. The normal approach is for the policy framework to inspect the configured credential store for the username and password to use. The default credential store is called basic.credentials, but you can specify on the policy that it should use a different credential store. See this article for more details: http://biemond.blogspot.nl/2010/08/http-basic-authentication-with-soa.html .

There is a short cut however, that we will use here. Instead of using a credential store, our security policy can also simply use a username and password that are configured as properties on the reference binding to which the policy is attached. For the purpose of this article, that is far more convenient.

Click on the reference binding once more. Locate the section Composite Properties | Binding Properties in the properties palette, as shown here.

image

Click on the green plus icon to add a new property. Its name is oracle.webservices.auth.username and the value is for example weblogic. Then add a second property, called oracle.webservices.auth.password and set its value:

SNAGHTML6760e82

You will notice that these two properties are not displayed in the property palette. However annoying that is, it is not a problem: the properties are added to the composite.xml file all the same:

image
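Pulling these pieces together, the relevant fragment of composite.xml will look roughly like the sketch below. The policy URI and the two property names are the ones configured in the steps above; the reference name, interface/port names, endpoint URL and credential values are illustrative placeholders for your own environment.

```xml
<reference name="EssService">
  <!-- interface and port names are illustrative; take them from the concrete ESS WSDL -->
  <interface.wsdl interface="http://xmlns.oracle.com/scheduler#wsdl.interface(ScheduleServiceImplPortType)"/>
  <binding.ws port="http://xmlns.oracle.com/scheduler#wsdl.endpoint(SchedulerServiceImplService/SchedulerServiceImplPort)"
              location="http://soahost:8001/ess/esswebservice?WSDL">
    <!-- the WSM client policy attached in the Configure SOA WS Policies dialog -->
    <wsp:PolicyReference URI="oracle/wss_username_token_client_policy"
                         orawsp:category="security" orawsp:status="enabled"/>
    <!-- binding properties that feed the username token; values are examples -->
    <property name="oracle.webservices.auth.username" type="xs:string" many="false">weblogic</property>
    <property name="oracle.webservices.auth.password" type="xs:string" many="false">welcome1</property>
  </binding.ws>
</reference>
```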

Deploy and Test

The work is done. Time to deploy the SOA composite to the run time.

Then invoke the service it exposes:

image

Wait for the response

image

and inspect the audit trail:

image

When we drill down into the flow trace and inspect the BPEL audit details, we will find the response from the ESS service – that contains the request identifier:

image

At this point apparently a successful job request submission has taken place with ESS. Let’s check in the ESS console:

image

Job request 605 has spawned job request 606, which is currently waiting:

image

A little later, the job request 606 is executed:

image

We can inspect the flow trace that was the result of this job execution:

image

Note that there is no link with the original SOA composite that invoked the scheduler service to start the job that now results in this second SOA composite instance.

After making two calls to the SOA composite that invokes the scheduler and waiting a little, the effects of a job that executes every five minutes (and that is started twice) become visible:

image

The post SOA Suite 12c: Invoke Enterprise Scheduler Service from a BPEL process to submit a job request appeared first on AMIS Blog.

Oracle SOA Suite 12c – Create, Deploy, Attach and Configure a Custom OWSM Policy – to report on service execution


This article describes how to develop a straightforward custom assertion that can be used as part of a custom OWSM policy to be attached to Web Services in WebLogic, such as services exposed by SOA Composite applications and Service Bus projects as well as custom JAX-WS or ADF BC Web Services. The custom assertion that I demonstrate here reports the execution of web service operations to a JMS Destination and/or the system output. It shows how to access property values set on the policy binding (values specific to the service the policy is attached to) and how to inspect the headers and contents of the request and response messages. Most custom assertions will use a subset of the mechanisms shown in this example. As always, the source code is available for download. Note: this article was edited on April 6th to reflect better code structure.

Custom assertions can be used in policies that are applied to web services. Depending on the type and configuration of the policy and assertions, they can be triggered at different moments and perform different tasks. These assertions are similar to aspects (in AOP) that take care of cross cutting concerns and that do not interfere with the internals of a service. Policies are attached (and detached) at runtime by the administrators. The assertion discussed in this article is to be attached to the service binding at the inbound end of a SOA composite application (or at a Service Bus proxy service that serves the same purpose). The assertion will report every incoming request as well as each response returned from the service binding. This information can be leveraged outside the scope of this article to monitor the runtime service environment.

The steps described in this article for creating the custom assertion and putting it into action are:

  • Create Custom Policy:
    • Assertion Java Class
    • Policy XML File
    • Policy Configuration XML File
  • Deploy Policy Artifacts to Runtime Fusion Middleware platform (and restart the WebLogic Servers)
  • Import Policy Definition into Runtime Fusion Middleware platform
  • Attach the Policy to a Service Binding in an existing SOA Composite application and configure the policy binding properties
  • Invoke the service exposed by the [Service Binding in the existing] SOA Composite application
  • Verify the results produced by the policy attachment

Create the Custom Policy

The main part of the custom assertion definition is a Java class. See for details the sources that can be downloaded from GitHub. The project contains a helper class – CustomAssertion – that takes care of some generic plumbing that is required for the AssertionExecutor superclass that needs to be extended. The class SOASuiteServiceExecutionReporter contains the custom logic that is to be executed whenever the policy assertion is triggered. In the current case, this logic consists of retrieving some key elements about the service request – service name, operation name, ECID, timestamp and selected payload details – and reporting them. Initially, this report consists of a few lines in the system output (i.e. the domain log file). Later on, we will send the report to a JMS destination.

The init() method is invoked by the OWSM framework when the policy is attached to a web service and whenever the configuration of the policy attachment is updated (i.e. its property values are changed). The init() method reads and processes the policy attachment configuration and initializes the SOASuiteServiceExecutionReporter, priming it for the correct actions whenever service executions trigger its execute method.

image

This code snippet relies heavily on the super class (CustomAssertion ) that returns the values for the properties from the iAssertion.

image

It also leverages the method initializeMessageTypesMapFromJson. This method performs the parsing of the operationsMap property in the policy binding configuration. The properties are defined in the policy definition file – see below – and are set to binding specific values in the EM FMW Control (or from WLST).

Properties can be simple string values. By using JSON snippets for the values of these properties, we can pass quite complex and extensive data structures into the policy attachment. In the current case, we use a JSON style property to specify for a policy binding which message types are processed; each message type is a key in the JSON object, and for each message type are defined: the name of the operation, an indication of whether the operation is one-way, and an XPath expression to derive a value from the message payload to be reported.

This JSON structure looks like this – here message type getFlightDetailsRequest is mapped to operation getFlightDetails; from the request message, the value of element /soap:Envelope/soap:Body/flig:getFlightDetailsRequest/flig:FlightCode/com:CarrierCode should be reported:

image
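An abbreviated sketch of such a property value, matching the mapping just described, looks like this (a complete value, including additional payload elements and a response section, is listed in a later article on this blog):

```json
{
  "getFlightDetailsRequest": {
    "operation": "getFlightDetails",
    "oneWay": "false",
    "request": {
      "doReport": "true",
      "payload": [
        {
          "name": "carrierCode",
          "xpath": "/soap:Envelope/soap:Body/flig:getFlightDetailsRequest/flig:FlightCode/com:CarrierCode",
          "namespaces": [
            { "prefix": "soap", "namespace": "http://schemas.xmlsoap.org/soap/envelope/" },
            { "prefix": "flig", "namespace": "com.flyinghigh/operations/flightservice" },
            { "prefix": "com",  "namespace": "com.flyinghigh/operations/common" }
          ]
        }
      ]
    }
  }
}
```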

The parsing of the JSON property is done using standard JSON-P support in this case, in the helper class ServiceReporterSettings:

image

In this code snippet, the JSON structure of the operationsMap property is parsed, interpreted and turned into a corresponding set of Java Objects. The data structures and class definitions are outlined in the next illustration:

image

 

 

The Execute method – processing every service execution

The execute method is invoked when the service receives a request or returns a response or a fault. The method gets passed in an IContext object. This object provides access to most relevant details about the request or response message – including the complete SOAP Envelope and the Transport Headers. Note that the GUID attribute contains the FMW ECID attribute value; the value is the same for the request message and the corresponding response (or fault) message.

image

 

One aspect of the custom assertion is the determination of the message type that is handled. The message type is read from the SOAP Body:

image

Here we use the getDataNode() helper method, which executes XPath queries against the mBody element, to derive the first child node within the SOAP Body.

When payload elements are to be extracted, this is done in a similar fashion:

image
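Outside the OWSM API, the essence of these two steps – determining the message type from the first element child of the SOAP Body, and evaluating a namespace-qualified XPath expression against the payload – can be sketched with plain JDK classes. The class and method names below are illustrative and differ from the helpers in the downloadable sources:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.Iterator;
import java.util.Map;
import javax.xml.XMLConstants;
import javax.xml.namespace.NamespaceContext;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.Node;

class SoapInspector {

    static final String SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/";

    // Namespace-aware parsing is essential: without it, XPath evaluation
    // against qualified element names silently yields empty results.
    static Document parse(String xml) throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true);
        return dbf.newDocumentBuilder().parse(
                new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
    }

    // The message type is the local name of the first element child of soap:Body.
    static String messageType(Document doc) {
        Element body = (Element) doc.getElementsByTagNameNS(SOAP_NS, "Body").item(0);
        for (Node n = body.getFirstChild(); n != null; n = n.getNextSibling()) {
            if (n.getNodeType() == Node.ELEMENT_NODE) {
                return n.getLocalName();
            }
        }
        return null;
    }

    // Evaluate a namespace-qualified XPath expression and return its string value,
    // using the prefix-to-namespace mappings from the policy binding configuration.
    static String extract(Document doc, String expression,
                          final Map<String, String> prefixToNamespace) throws Exception {
        XPath xpath = XPathFactory.newInstance().newXPath();
        xpath.setNamespaceContext(new NamespaceContext() {
            public String getNamespaceURI(String prefix) {
                String ns = prefixToNamespace.get(prefix);
                return ns != null ? ns : XMLConstants.NULL_NS_URI;
            }
            public String getPrefix(String namespaceURI) { return null; }
            public Iterator<String> getPrefixes(String namespaceURI) { return null; }
        });
        return (String) xpath.evaluate(expression, doc, XPathConstants.STRING);
    }
}
```

Note that both the namespace-aware parser and the NamespaceContext for the prefixes used in the expression are required; leaving either out makes the XPath evaluation return an empty string rather than fail.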

The report on the service execution is created like this:

image
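As a minimal, self-contained illustration of what such a report could contain – the service name, operation name, ECID, timestamp and extracted payload details mentioned earlier – consider the sketch below; the exact format and class names in the actual sources differ:

```java
import java.time.OffsetDateTime;

// Illustrative only: the report produced by the actual
// SOASuiteServiceExecutionReporter sources is formatted differently.
class ServiceExecutionReport {

    // Assemble a single report line from the key elements the assertion collects.
    static String buildReport(String service, String operation, String ecid,
                              OffsetDateTime timestamp, String payloadDetails) {
        return "ServiceExecutionReport[service=" + service
                + ", operation=" + operation
                + ", ecid=" + ecid
                + ", timestamp=" + timestamp
                + ", payload=" + payloadDetails + "]";
    }
}
```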

The policy can be attached to one service or – more commonly – to many services. Each policy attachment (aka policy binding) can be configured with property values that are specific to the service and to how the policy should act in the context of that service.

The file SOASuiteServiceExecutionReporterPolicyFile.xml contains the definition of the custom policy. This file is deployed to the runtime environment and also uploaded to the FMW Control, as we will see later on. It defines the policy and its metadata, including its properties.

image

The file policy-config.xml is another link in the chain. It joins the policy definition from the previous file with the Java Class.

image

 

 

Deploy Policy Artifacts to Fusion Middleware Infrastructure

Create a deployment profile (simple Java Archive) for the JDeveloper project. Deploy the project to a JAR file using this profile.

image

Copy JAR file to the WLS DOMAIN\lib directory.

Using the target information in the EM FMW Control, I find out the exact file location for the WebLogic domain that hosts the SOA Suite:

image

The lib directory under this domain home is where the jar file should be moved.

Restart the WebLogic domain.

 

Import Policy Definition into Fusion Middleware Infrastructure

Start EM FMW Control.

image

Navigate to WebLogic Domain – soa_domain | Web Services | WSM Policies.

Click on Import

image

Import a zip file with the appropriate structure (this means it should contain a folder structure of META-INF\policies\some-custom-folder-name\policyname.xml):

image

by clicking on Import

image

and selecting the right zip file:

image
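For a policy that is to show up as amis/monitoring (as selected below when attaching it), the imported archive would be laid out along these lines – the archive and file names are illustrative:

```text
AMIS_Custom_Policies.zip
└── META-INF
    └── policies
        └── amis
            └── monitoring.xml
```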

The report back:

image

and the policy is listed:

image

Details:

Note that the policy is enabled, local optimization is off, and the policy applies to service bindings (not SCA components, although that could be an option too) and is in the category Service Endpoint.

image

and on the assertion:

image

The policy is ready for attachment to service bindings.

 

Attach Policy to SOA Composite Service Bindings

Open the SOA Composite, such as the FlightService composite shown below. Click on the Service Binding to which the policy is to be attached:

image

Open the Policies tab:

image

Click on the button Attach/Detach to open the dialog where policies can be attached to the service binding.

image 

Select the amis/monitoring policy. Click on Attach to bind this policy to the service binding.

Click on OK to confirm the policy attachment.

Click on Override Policy Configuration to set the property values that apply specifically to this policy attachment:

image

The properties that are defined in the policy configuration file – SOASuiteServiceExecutionReporterPolicyFile.xml – are listed and the current values are shown. These values can now be overridden for this attachment of the policy to the FlightService.

image

Click on Apply to confirm the property values.

At this point, the policy is primed for action for the FlightService.

Test the Custom Policy Activity

By invoking the various FlightService operations, we can now see the policy in action.

image

The effect of this call is reported by the custom policy in the log-file:

image

A call to another operation results in a similar report:

image

in the log file:

 

image

Note: even services to which the policy is attached without any additional configuration override will have their execution reported. However, these reports obviously cannot include the operation name (only the message type) nor any values from the payload. Here is a report from the ConversionService that has the policy attached – without any configuration.

image

Resources

JSON parsing in Java – http://www.oracle.com/technetwork/articles/java/json-1973242.html and http://docs.oracle.com/javaee/7/api/javax/json/JsonReader.html.

Documentation for Fusion Middleware 12c (12.1.3)

Developing Extensible Applications for Oracle Web Services Manager –  http://docs.oracle.com/cd/E57014_01/owsm/extensibility/owsm-extensibility-create.htm#EXTGD153

Overriding Policy Configuration Properties – http://docs.oracle.com/cd/E57014_01/owsm/security/override-owsm-policy-config.htm#CACGHIFE

Managing Web Service Policies with Fusion Middleware Control – http://docs.oracle.com/cd/E57014_01/owsm/security/manage-owsm-policies.htm#OWSMS5573

XML Schema Reference for Predefined Assertions – http://docs.oracle.com/cd/E57014_01/owsm/security/owsm-assertion-schema.htm

Stepping Through Sample Custom Assertions – https://docs.oracle.com/middleware/1213/owsm/extensibility/owsm-extensibility-samples.htm#EXTGD162


The post Oracle SOA Suite 12c – Create, Deploy, Attach and Configure a Custom OWSM Policy – to report on service execution appeared first on AMIS Oracle and Java Blog.

Live Monitoring of SOA Suite Service Execution with Stream Explorer – leveraging Custom OWSM Policy and JMS


This article demonstrates how live monitoring of SOA Suite service execution can be implemented using a custom OWSM policy that reports to a JMS queue and with a simple Stream Explorer exploration that aggregates these JMS messages:

image

The ingredients are:

  • a SOA Suite 12c runtime environment
  • a Stream Explorer installation

and two files available with this article:

  • CustomPolicyAssertionArchive.jar (that contains the custom policy implementation)
  • AMIS_Custom_Policies.zip (that contains the policy definition)

and a JSON configuration of the policy binding.

Using the ingredients we will walk through the following stages and steps:

Stage 1:

  • Copy JAR file to the WLS_SOA_domain/lib directory (and restart the domain)
  • Import the ZIP file into the EM FMW Control (to define the new policy)
  • Attach the policy to a SOA Composite and configure the operations map property
  • Invoke the SOA Composite and check the SOA domain log file (to find service execution reports logged in the file)

Stage 2:

  • Configure JMS artifacts to provide the conduit for the service execution reports (JMS Server, Module, Connection Factory and Queue)
  • Update the configuration of the policy binding with the JMS destination
  • Invoke the SOA Composite and check the JMS Queue monitoring page in the WebLogic Administration Console (to find messages produced for web service calls)

Stage 3:

  • Run Stream Explorer and create a Stream on top of the JMS Queue
  • Create an Exploration on top of the Stream to report aggregated service execution metrics (per service and per operation over the last 30 minutes)
  • Invoke several operations on the SOA Composite (several times) and see how the StreamExplorer exploration is updated to provide the latest insight

This provides the foundation for a wide range of applications of the Service Execution Reporter policy along with Stream Explorer. In future articles, we will see the type of focused monitoring this foundation enables us to perform.

 

Stage 1 – Basic application of Service Execution Reporter policy

This previous article describes how the Service Execution Reporter policy is developed. The policy is deployed to a JAR file that you can download here: CustomPolicyAssertionArchive (extract it from the ZIP file). The configuration of the policy is laid down in a ZIP file that you can download here: AMIS_Custom_Policies.

The JAR file has to be copied to the WLS_SOA_domain/lib directory. Using the target information in the EM FMW Control – see next figure – I find out the exact file location for the WebLogic domain that hosts the SOA Suite:

image

The lib directory under this domain home is where the jar file should be moved.

Subsequently, the domain has to be restarted in order to make the contents of the jar file available in the SOA Suite run time.

Import the ZIP file into the EM FMW Control (to define the new policy)

Start EM FMW Control.

image

Navigate to WebLogic Domain – soa_domain | Web Services | WSM Policies.

Click on Import

image

Import the ZIP file by clicking on Import

image

and selecting the right zip file:

image

The report back:

image

and the policy is listed:

image

 

Attach the policy to a SOA Composite and configure the operations map property

Open the SOA Composite, such as the FlightService composite shown below. Click on the Service Binding to which the policy is to be attached:

image

Open the Policies tab:

image

Click on the button Attach/Detach to open the dialog where policies can be attached to the service binding.

image

Select the amis/monitoring policy. Click on Attach to bind this policy to the service binding.

Click on OK to confirm the policy attachment.

Click on Override Policy Configuration to set the property values that apply specifically to this policy attachment:

image

 

The properties that are defined in the policy configuration file – SOASuiteServiceExecutionReporterPolicyFile.xml – are listed and the current values are shown. These values can now be overridden for this attachment of the policy to the FlightService.

image

The full value of the operationsMap property in this case is:

{
    "getFlightDetailsRequest" : {
        "operation" : "getFlightDetails",
        "oneWay" : "false",
        "request" : {
            "doReport" : "true",
            "payload" : [
                {
                    "name" : "carrierCode",
                    "xpath" : "/soap:Envelope/soap:Body/flig:getFlightDetailsRequest/flig:FlightCode/com:CarrierCode",
                    "namespaces" : [
                        {
                            "prefix" : "soap",
                            "namespace" : "http://schemas.xmlsoap.org/soap/envelope/"
                        },
                        {
                            "prefix" : "flig",
                            "namespace" : "com.flyinghigh/operations/flightservice"
                        },
                        {
                            "prefix" : "com",
                            "namespace" : "com.flyinghigh/operations/common"
                        }
                    ]
                },
                {
                    "name" : "flightNumber",
                    "xpath" : "/soap:Envelope/soap:Body/flig:getFlightDetailsRequest/flig:FlightCode/com:FlightNumber",
                    "namespaces" : [
                        {
                            "prefix" : "soap",
                            "namespace" : "http://schemas.xmlsoap.org/soap/envelope/"
                        },
                        {
                            "prefix" : "flig",
                            "namespace" : "com.flyinghigh/operations/flightservice"
                        },
                        {
                            "prefix" : "com",
                            "namespace" : "com.flyinghigh/operations/common"
                        }
                    ]
                }
            ]
        },
        "response" : {
            "doReport" : "true",
            "payload" : [
                {
                    "name" : "flightStatus",
                    "xpath" : "/soap:Envelope/soap:Body/flig:getFlightDetailsResponse/flig:FlightStatus",
                    "namespaces" : [
                        {
                            "prefix" : "soap",
                            "namespace" : "http://schemas.xmlsoap.org/soap/envelope/"
                        },
                        {
                            "prefix" : "flig",
                            "namespace" : "com.flyinghigh/operations/flightservice"
                        },
                        {
                            "prefix" : "com",
                            "namespace" : "com.flyinghigh/operations/common"
                        }
                    ]
                }
            ]
        }
    },
    "retrievePassengerListForFlightRequest" : {
        "operation" : "retrievePassengerListForFlight",
        "oneWay" : "false",
        "request" : {
            "doReport" : "true",
            "payload" : [
                {
                    "name" : "carrierCode",
                    "xpath" : "/soap:Envelope/soap:Body/flig:retrievePassengerListForFlightRequest/flig:FlightCode/com:CarrierCode",
                    "namespaces" : [
                        {
                            "prefix" : "soap",
                            "namespace" : "http://schemas.xmlsoap.org/soap/envelope/"
                        },
                        {
                            "prefix" : "flig",
                            "namespace" : "com.flyinghigh/operations/flightservice"
                        },
                        {
                            "prefix" : "com",
                            "namespace" : "com.flyinghigh/operations/common"
                        }
                    ]
                }
            ]
        },
        "response" : {
            "doReport" : "true"
        }
    }
}

Obviously, you will have to provide the values that make sense for the services you want to attach the policy to. Note: if you do not define the operationsMap property for a particular policy binding, the service execution is still reported. However, these reports obviously cannot include the operation name (only the message type) nor any values from the payload.

Click on Apply to confirm the property values.

At this point, the policy is primed for action for the FlightService.

Invoke the SOA Composite and check the SOA domain log file (to find service execution reports logged in the file)

By invoking the various FlightService operations, we can now see the policy in action.

image

The effect of this call is reported by the custom policy in the log-file:

image

A call to another operation results in a similar report:

image

in the log file:

image

The third operation – sendFlightStatusUpdate – is not configured at all in the operationsMap property. When this operation is invoked:

image

The report:

image

Stage 2 – Configuration of resources to route Service Execution Reports to JMS

The reports produced by the policy can be sent to a JMS destination in addition to the log file output. And we need that. So we first need to prepare a simple JMS Queue that we can then configure on the policy to enable the JMS reporting.

Open the WebLogic Administration Console. Open the Services | Messaging node in the Domain Structure Navigator. Create a new JMS Server:

image

Set the name. Then press Next. Select the managed server running the SOA Suite (the engine that runs the SOA Composite applications) as the target.

image

Press Finish.

image

Click on the Services | Messaging | JMS Modules node. Click on the New button to create a new JMS Module.

image

Set the name of the JMS module:

image

Click on Next.

Select the managed server running the SOA Suite as the target for the JMS Module:

image

and press Next.

image

Check the checkbox and press Finish.

image

Open the tab Subdeployments:

image

Click on New to create  a new subdeployment. Set the name:

image

And click on Next.

Select the JMS Server that was created earlier on as the target:

image

Click Finish:

image

Open the Configuration tab. Click on the New button to create the Connection Factory:

image

Select the right radio button and click Next.

image

Set the name and the JNDI Name:

image

and click Next.

The target for the JMS Module is shown:

image

Click Finish. Create a new resource of type Queue:

image

Set the name and the JNDI Name:

image

Press Next.

Select the appropriate subdeployment and JMS Server (those that were created earlier):

image

Press Finish.

All four JMS artifacts are now created:

image
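As an aside: instead of clicking through the console, the same four artifacts can be created with a WLST script (run with wlst.sh, connected to the Admin Server). The sketch below is indicative only – the JNDI names are taken from this example, while the credentials and the target server name (soa_server1) are assumptions for your environment:

```python
# WLST sketch - server name, credentials and artifact names are assumptions
connect('weblogic', 'welcome1', 't3://localhost:7001')
edit(); startEdit()
cd('/')
jmsServer = cmo.createJMSServer('ReportingJMSServer')
jmsServer.addTarget(getMBean('/Servers/soa_server1'))
module = cmo.createJMSSystemResource('ReportingModule')
module.addTarget(getMBean('/Servers/soa_server1'))
sub = module.createSubDeployment('ReportingSubdeployment')
sub.addTarget(jmsServer)
resource = module.getJMSResource()
cf = resource.createConnectionFactory('ServiceExecutionReportingCF')
cf.setJNDIName('jms/ServiceExecutionReportingCF')
cf.setSubDeploymentName('ReportingSubdeployment')
queue = resource.createQueue('ServiceExecutionReportingQueue')
queue.setJNDIName('jms/ServiceExecutionReportingQueue')
queue.setSubDeploymentName('ReportingSubdeployment')
save(); activate()
disconnect()
```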

 

Update the configuration of the policy binding with the JMS destination

The policy was initially uploaded with a global configuration that includes the properties JMSDestination and JMSConnectionFactory set to empty strings. To configure the appropriate JMS artifact references, open the EM FMW Control and navigate to Web Logic Domain – soa_domain | Web Services | WSM Policies.

image

Locate the policy amis/monitoring. Click on Open link. Open the Assertion tab and click on Configuration.

image

Set the properties JMSDestination and JMSConnectionFactory to “jms/ServiceExecutionReportingQueue” and “jms/ServiceExecutionReportingCF” respectively:

image

Click OK to apply these values.

 

Invoke the SOA Composite and check the JMS Queue monitoring page in the WebLogic Administration Console

From SOAP UI make one call to the service by the SOA composite that has the Service Execution Reporter attached.

image

Both the request and response message will pass through the policy and trigger both an entry in the log file as well as a message sent to the JMS queue. We can verify the latter in the WebLogic Admin Console by checking the Monitoring tab for the queue:

image

Drilling down provides a little more insight into the messages that were published to the queue:

image

image

Invoke the SOA Composite’s service from SoapUI a few more times and the message count on the Monitoring tab for the JMS queue will increase further.

Clearly we have established JMS publication of a MapMessage for each service execution of the FlightService (and any other service that has the ServiceExecutionReporter policy attached).

 

Stage 3 – Monitor Service Execution using Oracle Stream Explorer explorations

The final piece of today’s puzzle is the step from the JMS Queue with its MapMessages to the Stream Explorer exploration that provides a count of recent service executions.

Run Stream Explorer

image

and create a Stream on top of the JMS Queue. Click on Create New Item and select Stream as the new Item Type to create.

Enter a name and a description and select the Stream’s source type as JMS:

image

Click Next.

Configure the JMS destination (the queue to use as the source) as shown next:

image

The URL is for the WebLogic managed server that hosts the JMS Queue; the admin username and password are used here to access the JMS Queue.

Click Next.

Specify the name for the ‘shape’ – the data structure in Stream Explorer to capture the events from the stream.

image

Select Manual Mapping and define the properties of the shape – corresponding with the properties written in the JMS Map – which are:

service, operation, ecid, stage, executionTimestamp – and whichever payload elements are configured for extraction in the operationsMap.

image

Press Create to create the Stream.

The wizard for creating the Exploration kicks in immediately after completing the Stream definition.

image

Specify name and description and optionally some tags.

image

Press Create. This takes you to the Exploration editor.

A lot is specified for the exploration:

  • The Summary to calculate is a count of the number of events – grouped by service and operation.
  • Filter only the events that have the stage set to request
  • Calculate the Summary over the last one hour and update the count every 10 seconds

image

Invoke several operations on the SOA Composite (several times) and see how the StreamExplorer exploration is updated to provide the latest insight:

 

image

Here we see how first (bottom two entries) some calls were made to the operation retrievePassengerListForFlight – the last two within 10 seconds of each other, because an entry with COUNT_of-service equal to 2 is missing. Subsequently, up to 7 calls were made to the getFlightDetails operation – not interrupted by calls to other operations in the FlightService. Note that calls 5 and 6 were close together – within 10 seconds of each other.

Let’s attach the policy to another SOA composite – just for kicks:

image

image

image

image

Then invoke an operation on the ConversionService composite:

image

followed by a few calls to the FlightService – and see the result in the Stream Explorer report:

image

 

It should hopefully be clear now that we have a way to observe and analyze service execution behavior using Stream Explorer, leveraging the output from the custom Service Execution Reporter policy.

image


The post Live Monitoring of SOA Suite Service Execution with Stream Explorer – leveraging Custom OWSM Policy and JMS appeared first on AMIS Oracle and Java Blog.


Use Oracle Stream Explorer and the Service Execution Reporter policy to analyze service behavior – find too-late-closing flights on Saibot Airport


This article shows how using the Service Execution Reporting policy – first introduced in this article: https://technology.amis.nl/2015/04/01/oracle-soa-suite-12c-create-deploy-attach-and-configure-a-custom-owsm-policy-to-report-on-service-execution/ – and the bridge created from the reporter through JMS to Stream Explorer – demonstrated in this article: https://technology.amis.nl/2015/04/06/live-monitoring-of-soa-suite-service-execution-with-stream-explorer-leveraging-custom-owsm-policy-and-jms/ – we can create a business monitor. The reports on service executions can be interpreted in a functional way to produce business insight.

In this article we will specifically monitor airplanes at the gate – an example inspired  by the Saibot Airport case in the Oracle SOA Suite 12c Handbook. Clearly, the time at the gate should be minimized. We will keep an eye on planes that remain at the gate for too long.

image

When a flight opens at the gate – the sendFlightStatusUpdate operation on the FlightService is invoked. Subsequently, as the flight starts boarding, has completed boarding and is closed (and departs), the same operation is invoked. The new status is reported to the service and routed onwards by the service to interested parties.

Using the Service Execution Reporter policy, we report calls to the sendFlightStatusUpdate operation and make sure that carrier, flight number and the new status are included in the report. In Stream Explorer, we create a Stream for consuming the service execution report messages from a JMS Queue. The Stream Explorer [data]shape contains properties for carrier, flight number and status. An exploration is based on the stream – filtering only on reports from the sendFlightStatusUpdate operation on the FlightService.

When this exploration is tested, we create a second exploration based on the missing event detection pattern. This exploration will detect cases where the report of a flight changing its status to open (at the gate, starting the departure procedure) is not followed quickly enough by a report of that same flight changing its status to closed. When this situation is detected, it is reported – and action can be taken.

We will see how we change the status of several flights to open in a short period of time. Then, for all but one of the flights, we change the status to closed. The Stream Explorer exploration will report the one flight for which the status was not updated [in time], proving that we can perform such business monitoring.

A video illustrating the end result achieved in this blog article is available from YouTube.

Configure Service Execution Reporter policy for the sendFlightStatusUpdate operation

We will assume here that the policy has been added to the SOA Suite runtime as is described in this article – by adding the JAR file and importing the policy description.

The policy needs to be attached to the FlightService and the configuration needs to be overridden to cater for the sendFlightStatusUpdate operation. This is done in the EM FMW Control. Select the FlightService SOA Composite. Click on the FlightService Web Service binding. Open the Policies tab. Attach the amis/monitoring policy. Click on the link to Override Policy Configuration, as shown in the next figure.

image

The Security Configuration Details popup appears. Here we can specify the values of the policy properties as they should be in the context of the FlightService. Make sure that the operationsMap property is set with the right configuration regarding the sendFlightStatusUpdateRequest message type and the associated sendFlightStatusUpdate operation.

image

Press Apply to save the changes.

Call the sendFlightStatusUpdate operation for example from SoapUI:

image

 

and verify whether the report is written to the log file as expected:

image

 

Apparently, the messages required to perform monitoring on flights that do not leave the gate soon enough are available on the JMS Queue. Let’s harvest and analyze them from Stream Explorer.

Create the Stream Explorer Stream and Exploration

Open Stream Explorer. Create a Stream for the JMS Queue to which the Service Execution Reporter publishes messages. Note: remove any existing streams on top of this queue to prevent the streams from competing for the queue’s messages.

image

The wizard for a new Stream opens.

Set the name and a description for the stream.

image

Then click Next.

Configure the JMS queue details:

image

And press Next.

Define the Shape (the data structure to capture the values from the MapMessages on the JMS Queue):

image

and define all properties – using the names of the properties written to the MapMessage:

image

Finally, click Create.

The wizard to create the Exploration appears. Define a name and a description:

image

Click on Next.

Define no special filters, aggregations or time constraints, so that all reports are simply passed through. Now make a few calls to the sendFlightStatusUpdate operation. Each call should produce a service execution report message that shows up in the exploration:

image

 

Create the Pattern Based Exploration to Detect Missing ‘flight closed’ Messages

The exploration we need now is one that is based on the Detect Missing Event pattern. The missing event in this case is the report of a status update to ‘closed’ – within the specified time – for a flight (carrier plus number) that was previously reported as ‘opened’. In a normal airport, we would perhaps use 40 minutes as the maximum period. In this demo case, we will use 40 seconds as the cut-off time.

First of all, we need to publish the exploration AllServiceExecutionReport – in order to use it as the source for our next exploration:

image

 

From this exploration we will siphon off the messages that relate to flight status updates into a new exploration FlightStatusOpenAndClosedUpdateReports.

image

Configure filters to focus on messages from the service default/FlightService/FlightService, where operation equals sendFlightStatusUpdate and stage equals request.

image

Note: I would have wanted to add a filter on status open or closed. However, Stream Explorer does not let me create such a filter at the present time.

Publish this exploration:

image

The challenge I have to address at this point is: identify cases where the status of a flight is updated to open and where there is no subsequent update of the status of that same flight to closed within 40 seconds. While there is no exact fit, this sounds very much like the Detect Missing Event pattern that Stream Explorer supports. I will create an exploration based on that pattern to see how close I can come to implementing my requirement.

Now create another new Exploration – of type Pattern:

image

Configure the Exploration – set a name, select FlightStatusOpenAndClosedUpdateReports as the input stream. Select the fields businessAttribute1, 2 and 3 – for carrier, flight number and status respectively – as the Tracking Fields and set the Heartbeat Interval to 40 seconds.


image

 

And at this point you probably realize that this is not entirely the correct pattern to detect. What we have specified here is that we want to get notified whenever it takes more than 40 seconds for a message with certain values for businessAttribute1, 2 and 3 to be followed by another message for the same values for the three business attributes. However, we want to raise the alarm only if there is not a message with status (businessAttribute3) closed within 40 seconds of a message with status open for a specific flight, identified by businessAttribute1 and 2. And this is a type of missing event detection that is one step too complex for Stream Explorer to handle. Its missing event detection pattern focuses on the simple case of a message with specified indicators that is not succeeded by a message with exactly the same set of indicators.

However, Stream Explorer brought us quite a long way. And it allows us to export the exploration – as an OEP application that can be imported into JDeveloper to be refined through normal OEP development. In JDeveloper, we can make a fairly small change that will turn the exploration into an OEP application that does exactly what we need it to do.

Export the Exploration:

image

Click on the Export link in the wizard page:

image

And save the file:

image

Open JDeveloper. Create a new, empty application – for example of type Custom Application.

Click on File | Import:

image

and select the option OEP Bundle into new project:

image

Select the file exported from Stream Explorer earlier on:

image

And the project is created from the JAR file:

image

Inspect the sources that were created by Stream Explorer. One processor for each exploration. The final one with the CQL logic for detecting missing events:

image

It is in this CQL query that we need to make some changes to achieve the functionality we desire. The CQL query is updated to detect specific situations where a flight status update event that reports the ‘open’ status is not followed – within 40 seconds – by a flight status update event that updates the flight to ‘closed’:

image

This rather small change is all it takes to take the Stream Explorer application and refine it to the point where it fulfills our needs.

The partition is defined by businessAttribute1 and 2 (carrier and flight number). The PATTERN is composed from event OPEN and event NOT_CLOSED. OPEN is defined as a flight status update with status is ‘open’. NOT_CLOSED is any message that does not indicate a ‘closed’ status for the same flight. Normally there would not be such a message. However, every 40 seconds, a timer event is added. This event satisfies the NOT_CLOSED condition. When the timer event comes sooner than the desired ‘closed’ status update, the pattern is satisfied and a result is produced.
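The detection logic just described can be expressed in a few lines of plain Python – purely as an illustration of the pattern semantics, not of the actual CQL:

```python
def flights_not_closed_in_time(events, window=40):
    """events: list of (timestamp_seconds, carrier, flight_number, status).

    Per flight (carrier plus number), report every 'open' status update that
    is not followed by a 'closed' update within the given window.
    """
    opened = {}   # (carrier, flight_number) -> timestamp of the 'open' update
    alerts = []
    for ts, carrier, number, status in sorted(events):
        key = (carrier, number)
        if status == 'open':
            opened[key] = ts
        elif status == 'closed' and key in opened:
            if ts - opened.pop(key) > window:
                alerts.append(key)  # closed, but too late
    # flights that were never closed at all also count as alerts
    alerts.extend(opened.keys())
    return alerts

events = [
    (0,  'KL', '1234', 'open'),
    (10, 'BA', '42',   'open'),
    (25, 'KL', '1234', 'closed'),   # closed within 40 seconds: no alert
    # BA 42 is never closed: alert
]
print(flights_not_closed_in_time(events))  # [('BA', '42')]
```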

In order to verify the effects of this change, I add a CSV Outbound Adapter to write the results to a file:

image

image

I then create a deployment profile for the project and deploy the OEP bundle to the OEP server – the same one that also runs Stream Explorer or a different one.

From SoapUI, I then send messages that set the flight status to open for a number of flights:

image

The file to which the results are written is almost empty:

image

I close a number of the flights, but not all of them:

image

One flight remains open. Will the OEP application detect the flight that was not closed within 40 seconds?

Of course it does:

image


The post Use Oracle Stream Explorer and the Service Execution Reporter policy to analyze service behavior – find too-late-closing flights on Saibot Airport appeared first on AMIS Oracle and Java Blog.

Oracle SOA Suite and WebLogic: Overview of key and keystore configuration


Keystores and the keys within can be used for security on the transport layer and application layer in Oracle SOA Suite and WebLogic Server. Keystores hold private keys (identity) but also public certificates (trust). This is important when WebLogic / SOA Suite acts as the server but also when it acts as the client. In this blog post I’ll explain the purpose of keystores, the different keystore types available and which configuration is relevant for which keystore purpose.

Why use keys and keystores?

The below image (from here) illustrates the TCP/IP model and how the different layers map to the OSI model. When I talk about the application and transport layers in the elaboration below, I mean the TCP/IP model layers – and more specifically HTTP.

The two main reasons why you might want to employ keystores are that

  • you want to enable security measures on the transport layer
  • you want to enable security measures on the application layer

Almost all of the below mentioned methods/techniques require the use of keys, and you can imagine the correct configuration of these keys within SOA Suite and WebLogic Server is very important. They determine which clients can be trusted, how services can be called and also how outgoing calls identify themselves.

You might think transport layer and application layer security are two completely separate things, but often they are not that separated. The combination of transport layer and application layer security has some limitations, and often the same products/components are used to configure both.

  • Double encryption is not allowed. See here. ‘U.S. government regulations prohibit double encryption’. Thus you are not allowed to apply encryption on the transport layer and the application layer at the same time. This does not mean it is technically impossible, but you might encounter some product restrictions since, you know, Oracle is a U.S. company.
  • Oracle Web Services Manager (OWSM) allows you to configure policies that check whether transport layer security is used (HTTPS in this case) and is also used to configure application level security. It is quite common for a single product to be used for both transport layer and application layer security – for example also API gateway products such as Oracle API Platform Cloud Service.

Transport layer (TLS)

Cryptography is achieved by using keys from keystores. On the transport layer you can achieve authentication, integrity, confidentiality and reliability – typically at the level of the host or the connection as a whole.

You can read more on TLS in SOA Suite here.

Application layer

On application level you can achieve similar feats (authentication, integrity, confidentiality, reliability), however often more fine-grained – for example at user level, or for a specific part of a message instead of at host level or for the entire connection. Performance is usually not as good as with transport layer security, because the checks which need to be performed can require actual parsing of messages instead of securing the transport (HTTP) connection as a whole regardless of what passes through. The implementation depends on the application technologies used and is thus quite variable.

  • Authentication by using security tokens such as for example
    • SAML. SAML tokens can be used in WS-Security headers for SOAP and in plain HTTP headers for REST.
    • JSON Web Tokens (JWT) and OAuth are also examples of security tokens
    • Certificate tokens in different flavors can be used which directly use a key in the request to authenticate.
    • Digest authentication can also be considered. Using digest authentication, a username-password token is created which is sent using WS-Security headers.
  • Security and reliability by using message protection. Message protection consists of measures to achieve message confidentiality and integrity. This can be achieved by
    • signing. XML Signature can be used for SOAP messages and is part of the WS Security standard. Signing can be used to achieve message integrity.
    • encrypting. Encrypting can be used to achieve confidentiality.
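As an illustration of the digest authentication mentioned above: the WS-Security UsernameToken Profile defines the password digest as Base64(SHA-1(nonce + created + password)). A minimal Python sketch (the password and nonce are examples):

```python
import base64
import hashlib
import os
from datetime import datetime, timezone

def username_token_digest(password, nonce=None, created=None):
    """Compute a WS-Security UsernameToken PasswordDigest.

    PasswordDigest = Base64(SHA-1(nonce + created + password))
    """
    nonce = nonce if nonce is not None else os.urandom(16)
    created = created or datetime.now(timezone.utc).strftime('%Y-%m-%dT%H:%M:%SZ')
    raw = nonce + created.encode() + password.encode()
    digest = base64.b64encode(hashlib.sha1(raw).digest()).decode()
    # The nonce is transmitted Base64-encoded, alongside Created and the digest
    return digest, base64.b64encode(nonce).decode(), created

digest, nonce_b64, created = username_token_digest('welcome1')
print(len(digest))  # 28: Base64 of a 20-byte SHA-1 digest
```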

Types of keystores

There are two types of keystores in use in WebLogic Server / OPSS: JKS keystores and KSS keystores. The main differences are summarized below:

JKS

JKS keystores are Java keystores which are saved on the filesystem. JKS keystores can be edited by using the keytool command which is part of the JDK. There is no direct support for editing JKS keystores from WLST, WebLogic Console or Fusion Middleware Control. You can however use WLST to configure which JKS file to use. For example, see here:

connect('weblogic','Welcome01','t3://localhost:7001')
edit()
startEdit()
cd('Servers/myserver')

cmo.setKeyStores('CustomIdentityAndCustomTrust')
cmo.setCustomIdentityKeyStoreFileName('/path/keystores/Identity.jks')
cmo.setCustomIdentityKeyStorePassPhrase('passphrase')
cmo.setCustomIdentityKeyStoreType('JKS')
cmo.setCustomTrustKeyStoreFileName('/path/keystores/Trust.jks')
cmo.setCustomTrustKeyStorePassPhrase('passphrase')
cmo.setCustomTrustKeyStoreType('JKS')

save()
activate()
disconnect()

Keys in JKS keystores can have passwords as can keystores themselves. If you use JKS keystores in OWSM policies, you are required to configure the key passwords in the credential store framework (CSF). These can be put in the map: oracle.wsm.security and can be called: keystore-csf-key, enc-csf-key, sign-csf-key. Read more here. In a clustered environment you should make sure all the nodes can access the configured keystores/keys by for example putting them on a shared storage.
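For example, a hedged WLST sketch (online mode, connected to the Admin Server; the aliases and passwords are placeholders for your own values) to create these CSF entries could look like:

```python
# WLST - values are placeholders
createCred(map='oracle.wsm.security', key='keystore-csf-key',
           user='owsm', password='keystorepassword')
# for the key entries, the 'user' is the key alias in the keystore
createCred(map='oracle.wsm.security', key='sign-csf-key',
           user='signkey', password='keypassword')
createCred(map='oracle.wsm.security', key='enc-csf-key',
           user='enckey', password='keypassword')
```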

KSS

OPSS also offers KeyStoreService (KSS) keystores. These are saved in a database in an OPSS schema which is created by executing the RCU (repository creation utility) during installation of the domain. KSS keystores are the default keystores to use since WebLogic Server 12.1.2 (and thus for SOA Suite since 12.1.3). KSS keystores can be configured to control access to keys either by policies or by passwords. OWSM does not support using a KSS keystore which is protected with a password (see here: ‘Password protected KSS keystores are not supported in this release’); thus for OWSM, the KSS keystore should be configured to use policy based access.

KSS keys cannot be configured to have a password, and using keys from a KSS keystore in OWSM policies thus does not require you to configure credential store framework (CSF) passwords to access them. KSS keystores can be edited from Fusion Middleware Control, by using WLST scripts or even by using a REST API (here). You can for example import JKS files quite easily into a KSS store with WLST using something like:

connect('weblogic','Welcome01','t3://localhost:7001')
svc = getOpssService(name='KeyStoreService')
svc.importKeyStore(appStripe='mystripe', name='keystore2', password='password',aliases='myOrakey', keypasswords='keypassword1', type='JKS', permission=true, filepath='/tmp/file.jks')

Where and how are keystores / keys configured

As mentioned above, keys within keystores are used to achieve transport security and application security for various purposes. Let us translate this to Oracle SOA Suite and WebLogic Server.

Transport layer

Incoming

  • Keys are used to achieve TLS connections between different components of the SOA Suite such as Admin Servers, Managed Servers, Node Managers. The keystore configuration for those can be done from the WebLogic Console for the servers and manually for the NodeManager. You can configure identity and trust this way and if the client needs to present a certificate of its own so the server can verify its identity. See for example here on how to configure this.
  • Keys are used to allow clients to connect to servers via a secure connection (in general, so not specific for communication between WebLogic Server components). This configuration can be done in the same place as above, with the only difference that no manual editing of files on the filesystem is required (since no NodeManager is relevant here).

Outgoing

Composites (BPEL, BPM)

Keys are used to achieve TLS connections to different systems from the SOA Suite. The SOA Suite acts as the client here. The configuration of the identity keystore can be done from Fusion Middleware Control by setting the KeystoreLocation MBean. See the below image. Credential store entries need to be added to store the identity keystore password and key password. Storing the key password is not required if it is the same as the keystore password. The credential keys to create for this are SOA/KeystorePassword and SOA/KeyPassword, with the user being the same as the key alias from the keystore to use. In addition, components also need to be configured to use a key to establish identity. In the composite.xml a property oracle.soa.two.way.ssl.enabled can be used to enable outgoing two-way SSL from a composite.

Setting SOA client identity store for 2-way SSL

 

Specifying the SOA client identity keystore and key password in the credential store

You can only specify one keystore/key for all two-way-SSL outgoing composite connections. This is not a setting per process. See here.

Service Bus

The Service Bus configuration for outgoing SSL connections is quite different from the composite configuration. The following blog here nicely describes the locations where to configure the keystores and keys. In the WebLogic Server console, you create a PKICredentialMapper which refers to the keystore and also contains the keystore password configuration. From the Service Bus project, a ServiceKeyProvider can be configured which uses the PKICredentialMapper and contains the configuration for the key and key password to use. The ServiceKeyProvider configuration needs to be done from the Service Bus console, since JDeveloper cannot resolve the credential mapper.

To summarize the above:

Overwriting keystore configuration with JVM parameters

You can override the keystores used with JVM system parameters such as javax.net.ssl.trustStore, javax.net.ssl.trustStoreType, javax.net.ssl.trustStorePassword, javax.net.ssl.keyStore, javax.net.ssl.keyStoreType and javax.net.ssl.keyStorePassword, in for example the setDomainEnv script. These will override the WebLogic Server configuration but not the OWSM configuration (application layer security, described below). Thus if you specify for example an alternative truststore on the command line, this will not influence HTTP connections going from SOA Suite to other systems – even when message protection (using WS-Security), which uses keys and checks trust, has been enabled. It will influence HTTPS connections though. For more detail on the above, see here.
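For example, appending something like the following to the setDomainEnv script overrides the trust store for the whole JVM (the path and password are illustrative):

```
EXTRA_JAVA_PROPERTIES="${EXTRA_JAVA_PROPERTIES} -Djavax.net.ssl.trustStore=/path/keystores/Trust.jks -Djavax.net.ssl.trustStoreType=JKS -Djavax.net.ssl.trustStorePassword=welcome1"
export EXTRA_JAVA_PROPERTIES
```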

Application layer

Keys can be used by OWSM policies to for example achieve message protection on the application layer. This configuration can be done from Fusion Middleware Control.

The OWSM run time does not use the WebLogic Server keystore that is configured using the WebLogic Server Administration Console and used for SSL. The keystore which OWSM uses by default is kss://owsm/keystore since 12.1.2 and can be configured from the OWSM Domain configuration. If you do not use the default keystore name for the KSS keystore, you must grant permission to the wsm-agent-core.jar in OPSS.

OWSM keystore contents and management from FMW Control

OWSM keystore domain config

In order for OWSM to use JKS keystores/keys, credential store framework (CSF) entries need to be created which contain the keystore and key passwords. The OWSM policy configuration determines the key alias to use. For KSS keystores/keys no CSF passwords to access keystores/keys are required since OWSM does not support KSS keystores with password and KSS does not provide a feature to put a password on keys. In this case the OWSM policy parameters such as keystore.sig.csf.key refer to a key alias directly instead of a CSF entry which has the key alias defined as the username.

Identity for outgoing connections (application policy level, e.g. signing and encryption keys) is established by using OWSM policy configuration. Trust for SAML/JWT (secure token service and client) can be configured from the OWSM Domain configuration.

Finally

This is only the tip of the iceberg

There is a lot to tell in the area of security. Zooming in on transport and application layer security, there is also a wide range of options and do’s and don’ts. I have not talked about the different choices you can make when configuring application or transport layer security. The focus of this blog post has been to provide an overview of keystore configuration/usage and thus I have not provided much detail. If you want to learn more on how to achieve good security on your transport layer, read here. To configure 2-way SSL using TLS 1.2 on WebLogic / SOA Suite, read here. Application level security is a different story altogether and can be split up in a wide range of possible implementation choices.

Different layers in the TCP/IP model

If you want to achieve solid security, you should look at all layers of the TCP/IP model and not just at the transport and application layer. It also helps if you use different security zones and divide your network, so that your development environment cannot accidentally access your production environment or the other way around.

Final thoughts on keystore/key configuration in WebLogic/SOA Suite

When diving into the subject, I realized using and configuring keys and keystores can be quite complex. The reason for this is that it appears that for every purpose of a key/keystore, configuration in a different location is required. It would be nice if that was it, however sometimes configuration overlaps such as for example the configuration of the truststore used by WebLogic Server which is also used by SOA Suite. This feels inconsistent since for outgoing calls, composites and service bus use entirely different configuration. It would be nice if it could be made a bit more consistent and as a result simpler.

The post Oracle SOA Suite and WebLogic: Overview of key and keystore configuration appeared first on AMIS Oracle and Java Blog.

Securing Oracle Service Bus REST services with OAuth2 client credentials flow (without using additional products)


OAuth2 is a popular authentication framework. As a service provider it is thus common to provide support for OAuth2. How can you do this on a plain WebLogic Server / Service Bus without having to install additional products (and possibly have to pay for licenses)? If you just want to implement and test the code (what), see this installation manual. If you want to know more details about the implementation (how) and choices made (why), read on!

Introduction

OAuth2 client credentials flow

OAuth2 supports different flows. One of the easiest to use is the client credentials flow. It is recommended to use this flow when the party requiring access can securely store credentials. This is usually the case when there is server to server communication (or SaaS to SaaS).

The OAuth2 client credentials flow consists of an interaction pattern between 3 actors, which each have their own role in the flow.

  • The client. This can be anything which supports the OAuth2 standard. For testing I’ve used Postman.
  • The OAuth2 authorization server. In this example I’ve created a custom JAX-RS service which generates and returns JWT tokens based on the authenticated user.
  • A protected service. In this example I’ll use an Oracle Service Bus REST service. The protection consists of validating the token (authentication using standard OWSM policies) and providing role based access (authorization).

When using OAuth2, the authorization server returns a JSON message containing (among other things) a JWT (JSON Web Token).

In our case the client authenticates using basic authentication to a JAX-RS servlet. This uses the HTTP header Authorization which contains ‘Basic’ followed by Base64 encoded username:password. Of course Base64 encoded strings can be decoded easily (e.g. by using sites like these) so never use this over plain HTTP!
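As a small plain-JDK illustration (the class and the credentials are made up, not taken from the article's code), this is how such a header is built, and how trivially it is decoded again:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Illustrative only: builds a Basic Authorization header value and shows
// that anyone who intercepts it can recover the credentials instantly.
public class BasicAuthDemo {

    static String basicHeader(String username, String password) {
        String token = Base64.getEncoder()
                .encodeToString((username + ":" + password).getBytes(StandardCharsets.UTF_8));
        return "Basic " + token;
    }

    static String decode(String header) {
        String b64 = header.substring("Basic ".length());
        return new String(Base64.getDecoder().decode(b64), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        String header = basicHeader("tokenuser", "Welcome01");
        System.out.println(header);
        // Base64 is an encoding, not encryption:
        System.out.println(decode(header));
    }
}
```

This is exactly why the text warns never to send Basic authentication over plain HTTP.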

When this token is obtained, it can be used in the Authorization HTTP header using the Bearer keyword. A service which needs to be protected can be configured with the following standard OWSM policies for authentication: oracle/http_jwt_token_service_policy and oracle/http_jwt_token_over_ssl_service_policy and a custom policy for role based access / authorization.

JWT

JSON Web Tokens (JWT) can look something like:

View the code on Gist.

This is not very helpful at first sight. When we look a little bit closer, we notice it consists of 3 parts separated by a ‘.’ character. These are the header, body and signature of the token. The first 2 parts can be Base64 decoded.
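A small sketch of this structure (the token contents here are made up; this only demonstrates the three-part layout and Base64url decoding, a real signature is a binary RSA signature):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Demonstrates the header.body.signature structure of a JWT.
// JWTs use the URL-safe Base64 variant without padding.
public class JwtStructureDemo {

    static final Base64.Encoder ENC = Base64.getUrlEncoder().withoutPadding();
    static final Base64.Decoder DEC = Base64.getUrlDecoder();

    static String build(String headerJson, String bodyJson, byte[] signature) {
        return ENC.encodeToString(headerJson.getBytes(StandardCharsets.UTF_8)) + "."
             + ENC.encodeToString(bodyJson.getBytes(StandardCharsets.UTF_8)) + "."
             + ENC.encodeToString(signature);
    }

    static String decodePart(String token, int part) {
        return new String(DEC.decode(token.split("\\.")[part]), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        String token = build("{\"alg\":\"RS256\",\"kid\":\"oauth2keypair\"}",
                             "{\"iss\":\"www.oracle.com\",\"sub\":\"tokenuser\"}",
                             "not-a-real-signature".getBytes(StandardCharsets.UTF_8));
        System.out.println("header: " + decodePart(token, 0));
        System.out.println("body  : " + decodePart(token, 1));
    }
}
```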

Header

The header typically consists of 2 parts (see here for an overview of fields and their meaning): the type of token and the hashing algorithm. In this case the header is

View the code on Gist.

kid refers to the key id. In this case it provides a hint to the resource server on which key alias to use in its key store to validate the signature.

Body

The JWT body contains so-called claims. In this case the body is

View the code on Gist.

sub is the subject for which the token was issued. www.oracle.com is the issuer of the token. iat indicates the epoch second at which the token was issued and exp indicates until when the token is valid. Tokens are only valid for a limited duration. www.oracle.com is an issuer which is accepted by default, so no additional configuration was required.
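As a small illustration of the iat/exp check a resource server performs (the 300-second validity window below is an assumption, not a value from the article):

```java
import java.time.Instant;

// Sketch of the claim-based validity check: a token is accepted only
// between its issued-at (iat) and expiry (exp) epoch seconds.
public class TokenExpiryDemo {

    static boolean isValid(long iat, long exp, long now) {
        return now >= iat && now < exp;
    }

    public static void main(String[] args) {
        long iat = Instant.now().getEpochSecond();
        long exp = iat + 300; // assumed 5-minute validity window
        System.out.println(isValid(iat, exp, iat + 10)); // within window: true
        System.out.println(isValid(iat, exp, exp + 1));  // past expiry: false
    }
}
```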

Signature

The signature contains a hash of the header and body of the token, encrypted with the issuer’s private key. If header or body are altered, the signature validation will fail. Tokens are thus signed using a public/private key pair: the private key signs, and the corresponding public key is used to verify.
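To illustrate the signing principle with plain JDK classes (the actual token signing in this post is done by a JWT library; this is not the author's code, just the underlying idea):

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

// Sign data with a private key, verify with the public key, and show
// that tampering with the data breaks verification.
public class SignatureDemo {

    static byte[] sign(byte[] data, java.security.PrivateKey key) throws Exception {
        Signature s = Signature.getInstance("SHA256withRSA");
        s.initSign(key);
        s.update(data);
        return s.sign();
    }

    static boolean verify(byte[] data, byte[] sig, java.security.PublicKey key) throws Exception {
        Signature s = Signature.getInstance("SHA256withRSA");
        s.initVerify(key);
        s.update(data);
        return s.verify(sig);
    }

    public static void main(String[] args) throws Exception {
        KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
        gen.initialize(2048);
        KeyPair kp = gen.generateKeyPair();

        byte[] payload = "header.body".getBytes(StandardCharsets.UTF_8);
        byte[] sig = sign(payload, kp.getPrivate());
        System.out.println(verify(payload, sig, kp.getPublic()));   // true
        System.out.println(verify("tampered".getBytes(StandardCharsets.UTF_8),
                                  sig, kp.getPublic()));            // false
    }
}
```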

Challenges

Implementing the OAuth2 client credentials flow using only a WebLogic server and OWSM can be challenging. Why?

  • Authentication server. Bare WebLogic + Service Bus do not contain an authentication server which can provide JWT tokens.
  • Resource Server. Authentication of tokens. The predefined OWSM policies which provide authentication based on JWT tokens (oracle/http_jwt_token_service_policy and oracle/http_jwt_token_over_ssl_service_policy) are picky about which tokens they accept.
  • Resource Server. Authorization of tokens. OWSM provides a predefined policy to do role based access to resources: oracle/binding_permission_authorization_policy. This policy works for SOAP and REST composites and Service Bus SOAP services, but not for Service Bus REST services.

Custom components

How did I solve these challenges? I created two custom components:

  • Create a simple authentication server to provide tokens which conform to what the predefined OWSM policies expect. By increasing the OWSM logging and checking for errors when sending in tokens, it becomes clear which fields are expected.
  • Create a custom OWSM policy to provide role based access to Service Bus REST resources

Authentication server

The authentication server has several tasks:

  • authenticate the user (client credentials)
    • using the WebLogic security realm
  • validate the client credentials request
    • using Apache HTTP components
  • obtain a public and private key for signing
    • from the OPSS KeyStoreService (KSS)
  • generate a token and sign it

Authentication

User authentication for servlets on WebLogic Server consists of 2 configuration files.

A web.xml. This file indicates

  • which resources are protected
  • how they are protected (authentication method, TLS or not)
  • who can access the resources (security role)

The weblogic.xml indicates how the security roles map to WebLogic Server users and groups. In this case any user in the WebLogic security realm group tokenusers (which can come from an external authentication provider such as AD or another LDAP) can access the token service to obtain tokens.
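The descriptors themselves are not included in this excerpt. A minimal sketch of what the relevant fragments could look like — the URL pattern and role name tokenrole are assumptions of mine, only the group name tokenusers comes from the article:

```xml
<!-- web.xml fragment: protect the token resource with BASIC authentication -->
<security-constraint>
  <web-resource-collection>
    <web-resource-name>TokenService</web-resource-name>
    <url-pattern>/token/*</url-pattern>
  </web-resource-collection>
  <auth-constraint>
    <role-name>tokenrole</role-name>
  </auth-constraint>
</security-constraint>
<login-config>
  <auth-method>BASIC</auth-method>
</login-config>
<security-role>
  <role-name>tokenrole</role-name>
</security-role>

<!-- weblogic.xml fragment: map the role to the realm group tokenusers -->
<security-role-assignment>
  <role-name>tokenrole</role-name>
  <principal-name>tokenusers</principal-name>
</security-role-assignment>
```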

Validate the credentials request

From Postman you can do a request to the token service to obtain a token. Postman’s built-in OAuth2 support can also be used, since the response of the token service conforms to the OAuth2 standard.

By default certificates are checked. With self-signed certificates / development environments, those checks (such as host name verification) might fail. You can disable the certificate checks in the Postman settings screen.

Postman also has a console available which allows you to inspect requests and responses in more detail. The request looked like:

Thus this is what needed to be validated: an HTTP POST request with an application/x-www-form-urlencoded body containing grant_type=client_credentials. I’ve used the Apache HTTP components org.apache.http.client.utils.URLEncodedUtils class for this.
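The article uses Apache's URLEncodedUtils; as a hedged stand-in, a plain-JDK sketch of the same validation could look like this (the class and method names are mine, not the author's):

```java
import java.net.URLDecoder;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

// Parses an application/x-www-form-urlencoded body and checks that the
// request asks for the client credentials grant.
public class GrantTypeCheck {

    static Map<String, String> parseForm(String body) {
        Map<String, String> params = new HashMap<>();
        for (String pair : body.split("&")) {
            String[] kv = pair.split("=", 2);
            String key = URLDecoder.decode(kv[0], StandardCharsets.UTF_8);
            String val = kv.length > 1 ? URLDecoder.decode(kv[1], StandardCharsets.UTF_8) : "";
            params.put(key, val);
        }
        return params;
    }

    static boolean isClientCredentialsRequest(String body) {
        return "client_credentials".equals(parseForm(body).get("grant_type"));
    }

    public static void main(String[] args) {
        System.out.println(isClientCredentialsRequest("grant_type=client_credentials")); // true
        System.out.println(isClientCredentialsRequest("grant_type=password"));           // false
    }
}
```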

After deployment I of course needed to test the token service. Postman worked great for this but I could also have used Curl commands like:

View the code on Gist.

Accessing the OPSS keystore

Oracle WebLogic Server provides Oracle Platform Security Services.

OPSS provides secure storage of credentials and keys. A policy store can be configured to allow secure access to these resources. This policy store can be file based, LDAP based or database based. You can look at your jps-config.xml file to see which one is in use in your case:

You can also look this up from the EM:

In this case the file based policy store system-jazn-data.xml is used. Presence of the file on the filesystem does not mean it is actually used! If there are multiple policy stores defined, for example a file based and an LDAP based one, the last one appears to be used.

The policy store can be edited from the EM

You can create a new permission:


Codebase: file:${domain.home}/servers/${weblogic.Name}/tmp/_WL_user/oauth2/-
Permission class: oracle.security.jps.service.keystore.KeyStoreAccessPermission
Resource name: stripeName=owsm,keystoreName=keystore,alias=*
Actions: read

The codebase indicates the location of the deployment of the authentication server (Java WAR) on WebLogic Server.

Or, when file based, you can edit the file (usually system-jazn-data.xml) directly.

In this case add:


<grant>
  <grantee>
    <codesource>
      <url>file:${domain.home}/servers/${weblogic.Name}/tmp/_WL_user/oauth2/-</url>
    </codesource>
  </grantee>
  <permissions>
    <permission>
      <class>oracle.security.jps.service.keystore.KeyStoreAccessPermission</class>
      <name>stripeName=owsm,keystoreName=keystore,alias=*</name>
      <actions>*</actions>
    </permission>
  </permissions>
</grant>

At the location shown below

Now if you create a stripe owsm with a policy based keystore called keystore, the authentication server is allowed to access it!


The stripe name and keystore name used here are the defaults used by the predefined OWSM policies, so when using these you do not need any additional configuration (WSM domain config, policy config). Note that OWSM only supports policy based KSS keystores. When using JKS keystores, you need to define credentials in the credential store framework and update the policy configuration to point to the credential store entries for the keystore password, key alias and key password. The provided code for accessing the keystore / keypair is KSS based. Inside the keystore you can import or generate a keypair; the current Java code of the authentication server expects a keypair oauth2keypair to be present in the keystore.

Accessing the keystore and key from Java

I defined a property file with some parameters. The file contained (among some other things relevant for token generation):


keystorestripe=owsm
keystorename=keystore
keyalias=oauth2keypair
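A sketch of how such settings could be read with java.util.Properties (loading from a StringReader here just to keep the example self-contained; the actual loading code is in the Gist):

```java
import java.io.StringReader;
import java.util.Properties;

// Reads the keystore settings; the key names match the fragment above.
public class ConfigDemo {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.load(new StringReader(
            "keystorestripe=owsm\nkeystorename=keystore\nkeyalias=oauth2keypair\n"));
        System.out.println(props.getProperty("keystorestripe")); // owsm
        System.out.println(props.getProperty("keyalias"));       // oauth2keypair
    }
}
```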

Accessing the keystore can be done as is shown below.

View the code on Gist.

When you have the keystore, accessing keys is easy

View the code on Gist.

(my key didn’t have a password but this still worked)

Generating the JWT token

After obtaining the keypair at the keyalias, the JWT token libraries required instances of RSAPrivateKey and RSAPublicKey. That could be done as is shown below

View the code on Gist.
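Assuming the key pair from the keystore is RSA (as the signing algorithm implies), the conversion amounts to a cast. In this standalone sketch a freshly generated pair stands in for the one obtained from the OPSS keystore:

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.interfaces.RSAPrivateKey;
import java.security.interfaces.RSAPublicKey;

// The generic PublicKey/PrivateKey instances of an RSA pair implement
// the RSAPublicKey/RSAPrivateKey interfaces, so a cast suffices.
public class RsaCastDemo {
    public static void main(String[] args) throws Exception {
        KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
        gen.initialize(2048);
        KeyPair kp = gen.generateKeyPair();

        RSAPublicKey publicKey = (RSAPublicKey) kp.getPublic();
        RSAPrivateKey privateKey = (RSAPrivateKey) kp.getPrivate();
        System.out.println(publicKey.getModulus().bitLength()); // 2048
    }
}
```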

In order to sign the token, an RSAKey instance was required. I could create this from the public and private key using a RSAKey.Builder method.

View the code on Gist.

Using the RSAKey, I could create a Signer

View the code on Gist.

Preparations were done! Now only the header and body of the token remained. These were quite easy to create with the provided builder.

Claims:

View the code on Gist.

Generate and sign the token:

View the code on Gist.

Returning an OAuth2 JSON message could be done with

View the code on Gist.
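The Gist itself is not included in this excerpt; per the OAuth2 standard (RFC 6749), the JSON response presumably looks something like the following, where the values are purely illustrative:

```json
{
  "access_token": "eyJraWQiOiJvYXV0aDJrZXlwYWlyIiwiYWxnIjoiUlMyNTYifQ.eyJzdWIiOi...",
  "token_type": "Bearer",
  "expires_in": 3600
}
```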

Role based authorization policy

The predefined OWSM policies oracle/http_jwt_token_service_policy and oracle/http_jwt_token_over_ssl_service_policy create a SecurityContext which is available from the $inbound/ctx:security/ctx:transportClient inside Service Bus. Thus you do not need a custom identity asserter for this!

However, the policy does not allow you to configure role based access, and the predefined policy oracle/binding_permission_authorization_policy does not work for Service Bus REST services. Thus we need a custom policy to achieve this. Luckily this policy can use the previously set SecurityContext to obtain the principals to validate.

Challenges

Providing the correct capabilities in the policy definition was a challenge. The policy should work for Service Bus REST services. The predefined policies provide examples, but they could not be exported from the WSM Policies screen. I did a ‘Create like’ of a predefined policy which provided the correct capabilities and then copied those capability definitions into my custom policy definition file. Good to know: some capabilities require the text ‘rest’ to be part of the policy name.

Also I encountered a bug in 12.2.1.2 which is fixed with the following patch: Patch 24669800: Unable to configure Custom OWSM policy for OSB REST Services. In 12.2.1.3 there were no issues.

An OWSM policy consists of two deployments

A JAR file

  • This JAR contains the Java code of the policy. The Java code uses the parameters defined in the file below.
  • A policy-config.xml file. This file indicates which class implements the policy. An important part of this file is the reference to restUserAssertion, which maps to an entry in the policy description file below.

A policy description ZIP file

  • This contains a policy description file.

The description ZIP file contains a single XML file which answers questions like:

  • Which parameters can be set for the policy?
  • Of which type are the parameters?
  • What are the default values of the parameters?
  • Is it an authentication or authorization policy?
  • Which bindings are supported by the policy?

The policy description file contains an element which maps to the entry in the policy-config.xml file. Also, the ZIP file has a structure which is in line with the name and ID of the policy. It looks like:
Thus the name of the policy is CUSTOM/rest_user_assertion_policy
This name is also part of the contents of the rest_user_assertion_policy file. You can also see there is again a reference to the implementation class, and the restUserAssertion element from the policy-config.xml file is also there. The capabilities of the policy are mentioned in the restUserAssertion attributes.

Implementation

As indicated, for more detail see the installation manual here. The installation consists of:

  • Create a stripe, keystore and keypair to use for JWT signature encryption and validation
  • Add a system policy so the token service can access the keystore
  • Create a group tokenusers which can access the token service to obtain tokens
  • Deploy the token service
  • Apply Patch 24669800 if you’re not on 12.2.1.3
  • Copy the custom OWSM policy JAR file to the domain lib folder
  • Import the policy description

If you have done the required preparations, adding OAuth2 protection to Service Bus REST services is as easy as adding 2 policies to the service and indicating which principals (users or groups, in a comma separated list) are allowed to access the service.

Finally

As mentioned before, the installation manual and code can be found here. Of course this solution does not provide all the capabilities of a product like API Platform Cloud Service, OAM or OES. Usually you don’t need all those capabilities and complexity, and a simple token service / policy providing the OAuth2 client credentials flow is enough. In such cases you can consider this alternative. Mind that the entire service is protected by the policy and not specific resources; that would require extending the custom OWSM policy. Also, if someone tries to log in to the token service with basic authentication and uses a wrong password for the user weblogic, that account may get locked. Because of this, and because of other resources which are available by default on WebLogic Server / Service Bus, you’ll require some extra protection when exposing this to the internet, such as a firewall, IP whitelisting, SSL offloading, etc.


The post Securing Oracle Service Bus REST services with OAuth2 client credentials flow (without using additional products) appeared first on AMIS Oracle and Java Blog.
