FIM 2010 | Concurrent Workflow execution and “data smuggling”

Recently I was presented with a challenge to solve; the resulting solution turned out to be a little controversial, hence I’ve decided to share my thoughts in a blog entry.

Allow me to provide a scenario to illustrate the dilemma:

          A user attempts to create a group object in the FIM 2010 portal.

          The group creation process consists of several approval processes and notifications

          Multiple concurrent workflow instances could be initiated upon submission of the request

          In the end, notification(s) about the success or failure of the group creation are sent to the requestor

The key phrase here is "multiple concurrent workflows"

Follow this logic with me for a minute

1.  Group request is submitted

2.  Workflow A is kicked-off to execute the following activities:

          Initiate a manual approval process (if needed)

          Verify group attributes for validity (future group size, scope, etc.)

          Send an approval/rejection notification back to the requestor

 

2.  Workflow B is kicked-off to execute the following activities:

          Verify the uniqueness of the sAMAccountName against a 3rd-party service which provides us with an availability status as well as the capability to reserve an alias/name for an object (ensuring alias uniqueness throughout the entire enterprise)

          Send an approval/rejection notification back to the requestor

 

Note that Workflow A and Workflow B are executed concurrently, without awareness of each other's existence.

I would like you to forget about my private instance of the "Alias Reservation" web-service. We could be talking about ANY other "foreign" system you’ll have to communicate with from inside of FIM 2010 code. It can be anything from creating an XML file with some data in it for something external to read, to SQL calls, to AD queries. I am sure you’ll have some reason to do such a thing.

Assume that all activities are executed during the "Authorization" phase. They are executed before the actual group object is created in the FIM application.

Should workflow A succeed AND workflow B succeed, the group will be created; however, should one of those workflows fail, group creation will not happen. Sounds good so far…

The gotcha comes when one of the workflows results in "deny" and another in "allow", which is a very plausible scenario. Let's say that the requested alias is not available or not approved by internal policy, but the manual approval process succeeded. In this case the "allowing" workflow will send a "Success" message to the requestor upon its own completion, and the "denying" workflow will send a "Failure" message to the same requestor upon its own completion. Moreover, by using the "canned" notification activity we cannot sufficiently elaborate on the exact cause of failure. As you can imagine, this is a rather confusing user experience:
You've clicked submit, one of the workflows sends you a "success" message, and a few moments/hours/days later another workflow sends you a "deny" message. What are you going to do? You'll probably call the help desk and demand an explanation.

Another side-effect of approval by one stream and denial by another is that the alias for the group could be wrongfully reserved. Let's say that the alias was available and the workflow that executed the alias verification and reservation activities succeeded, meaning that it went ahead and reserved the alias with our 3rd-party web-service. At that point our workflow object is gone. After a while, in the other "stream" (with manual approval), we receive a "deny" for whatever reason. So what is going to happen with the reserved alias? It is going to remain reserved. The next time the user tries to re-create the group with the same name (presumably having resolved whatever caused the approver's denial in the first round), he/she will receive an "alias not available" message.
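
To make this orphaned-reservation problem concrete, here is a minimal sketch of the reserve/release pair such an integration implies. The IAliasReservationClient interface and its method names are entirely hypothetical stand-ins for whatever reservation service (or other external system) you are calling; none of this is FIM API.

// Hypothetical stand-in for the 3rd-party alias reservation service.
// Whatever reserves an alias during Authorization needs a matching Release,
// and that Release can only run once the fate of the WHOLE request is known.
public interface IAliasReservationClient
{
    // Returns false when the alias is already taken somewhere in the enterprise.
    bool Reserve(string alias);

    // Frees a previously reserved alias (the compensation step that a lone
    // concurrent workflow has no way of knowing it should perform).
    void Release(string alias);
}

The difficulty described above is precisely that no single concurrent workflow knows, on its own, whether Release should ever be called.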

Once again: forget about my private instance of the "Alias Reservation" web-service. We could be creating an XML file on some file share for an external system to read, or writing something to a SQL server to trigger some auto-magical transformation of data. The idea is that we’ve made a call to the outside from within an incomplete internal loop.
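
For illustration, here is a minimal sketch of that kind of "call to the outside": dropping an XML file on a share for an external system to pick up. The share path, element names, and the ExternalHandoff class are all made up for this example; nothing here is provided by FIM.

using System.Xml.Linq;

internal static class ExternalHandoff
{
    public static void WriteGroupRequest(string alias, string requestor)
    {
        // Build a small hand-off document describing the pending group request.
        XElement handoff = new XElement("GroupRequest",
            new XElement("Alias", alias),
            new XElement("Requestor", requestor),
            new XElement("Status", "PendingApproval"));

        // Once this file lands on the share, the external system owns it; FIM has
        // no way to "un-ring the bell" if a concurrent workflow later denies the request.
        handoff.Save(@"\\fileserver\fim-handoff\group-" + alias + ".xml");
    }
}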

The easiest way of fixing this is to merge the name validation/reservation and approval activities into a single workflow. That will solve the possible ping-pong of conflicting notifications, since they will be executed consecutively and NOT concurrently. However, when this is not possible and we have to deal with the possibility of several independent workflow threads, we face the challenge of figuring out how to avoid this mis-communication problem as well as false-positive execution/data-delivery to some external system.

Since we’ve decided not to merge the authorization workflows (to create one consecutive, action-after-action flow), our next option is to re-think the timing of execution of each activity. We need to ensure that certain events happen at a fixed point in time.

As you might know, all Authentication workflows complete their execution before the Authorization workflows are initialized, and all Authorization workflows are executed and completed before all Action workflows. So to ensure that the notification happens AFTER ALL authorization workflows are completed, we have to move the notification and the actual execution of the alias reservation into the Action phase of the request.

This presents us with another set of tasks:

          By the time the request reaches the Action phase, the requested object has actually been physically created in the application store of FIM. So should we detect that the object cannot be created for one reason or another, we’ll have to ensure that we delete it before the request reaches the end of the "pipe"

          We have to store some execution/verification results from the Authorization phase in the request object itself, so we can access them during the Action phase

The first one is relatively easy. The FIM product team provided a very handy "deleteResourceAction" activity that you can use. It is simple to implement and I’ll leave the pleasure of discovering how it works entirely to you.

On the other hand, the storage of temporary variables between the Authorization and Action phases is a relatively interesting subject to mention here.

So we have to "tattoo" values into the CurrentRequest object so that we can access them later, when they are needed.

Follow me:

          A user requests to rename a group from ABC to XYZ

          In the Authorization phase we can verify that XYZ is available and that all manual approvals executed successfully

          Since the Authorization phase completed successfully, FIM will change the object name to "XYZ" and pass the "currentRequest" object to the next phase: Action

          Since we are bound to "release" the previously used alias "ABC", so it can be re-used by others, we have to know this value. However, as you already know, the object in the app store was renamed to "XYZ" and the value "ABC" was discarded.

The answer is to store the value "ABC" in the "currentRequest" object while we are in the Authorization phase and still have access to that data. Easier said than done…

My first instinct was to use the currentRequest.AddParameter() method to augment the current request with a new parameter to hold the variable I wanted to store. That didn’t exactly work: the app store threw an exception and failed on me. After several attempts (that would be much, much more than one) I discovered that I simply cannot augment the current request in this phase by adding a new CreateRequestParameter or UpdateRequestParameter object to my currentRequest.

Technically, we cannot alter the "currentRequest" object while it is being executed. Hence the aforementioned controversial status of this solution.

So I tried another way of smuggling data between the Authorization and Action phases of the execution.

The answer is: Microsoft.ResourceManagement.WebServices.WSResourceManagement.ResourcePropertyInfo object

I was able to modify this object and store my data in the property info. Here is the method that you might want to use if you find yourself on the same path:

private void AddResourceProperty(string propertyName, string value)
{
    // Wrap the value we want to smuggle in a ResourcePropertyInfo descriptor.
    ResourcePropertyInfo resourcePropertyInfo = new ResourcePropertyInfo(value, "stringType", false, false);
    resourcePropertyInfo.MaxOccurs = 1;
    resourcePropertyInfo.MinOccurs = 0;

    // Attach it to the current request under the given name so that an
    // Action-phase activity can find it later.
    this.CurrentRequest.ResourceProperties.Add(propertyName, resourcePropertyInfo);
}

It is a little "creative" and perhaps strange to use a property description to store data, but, hey, it works! So for all future data smugglers between the Authorization and Action workflows in FIM 2010, there is your hidden compartment in which to stuff your booty.
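
For completeness, here is a hypothetical usage sketch of the helper above, following the ABC-to-XYZ rename example. The "OriginalAlias" property name is made up, and the read-back side assumes ResourceProperties exposes dictionary-style lookups (which the Add(key, value) call above suggests, but verify against your own build).

// In the Authorization-phase activity, while the old alias is still accessible:
this.AddResourceProperty("OriginalAlias", "ABC");

// Later, in an Action-phase activity, look the smuggled entry back up by key
// (assuming dictionary semantics on ResourceProperties):
if (this.CurrentRequest.ResourceProperties.ContainsKey("OriginalAlias"))
{
    ResourcePropertyInfo smuggled = this.CurrentRequest.ResourceProperties["OriginalAlias"];
    // ... release the "ABC" reservation with the external service, send the
    // consolidated notification, and so on ...
}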

Conclusion:

In the MIIS/ILM timeframe I had one rule I would not break: "No external calls from inside of the code". In the FIM timeframe the rules have changed. The FIM Application Store is not state-based the way the Sync Engine was and still is. Making calls to other systems from within a transaction will be a reality in many cases, so for a MIIS/ILM programmer it could be hard to adjust to this mentality. And for a novice FIM 2010 programmer (as all of us are at this point) it is important to remember the transactional nature of the Application Store and the possibility of multiple threads and paths for a single request during execution. Calls to external systems must be thought through very well to avoid unexpected loopholes.

By the way, I am utterly amazed that you are still reading this article and have not fallen asleep.

Happy coding!

 
