Archive for September, 2009

Finding DLL dependencies for custom activities in FIM 2010 RC1

It is almost time for the brand new RC1 of FIM 2010 to be released. I am sure that most people who have coded custom activities for RC0 will want to check out the new release of the libraries to recompile and re-test their activities.
It has probably been a while since you last looked into your "%drive%\Program Files\Microsoft Forefront Identity Manager\2010\Portal" folder to get your dependency DLLs.
After you install RC1 you will see significantly fewer files in that folder. Where are all those DLLs?! The answer is rather simple: all the dependency DLLs that you need to reference for your custom activities are packaged into .wsp containers.
So how do you extract those .wsp files?
a) Copy MicrosoftILMPortalCommonDlls.wsp to MicrosoftILMPortalCommonDlls.cab
b) Now you should be able to open MicrosoftILMPortalCommonDlls.cab with plain Windows Explorer and find all your DLLs there. Simply copy them out.
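
If you would rather script the extraction than click through Explorer, a throwaway sketch along these lines will do the same job. The paths and destination folder are illustrative; expand.exe ships with Windows:

using System.Diagnostics;
using System.IO;

class WspExtractor
{
    static void Main()
    {
        string portal = @"C:\Program Files\Microsoft Forefront Identity Manager\2010\Portal";
        string dest = @"C:\FIMReferences";
        Directory.CreateDirectory(dest);

        // A .wsp is just a renamed .cab, so a copy is all the "conversion" we need.
        string cab = Path.Combine(dest, "MicrosoftILMPortalCommonDlls.cab");
        File.Copy(Path.Combine(portal, "MicrosoftILMPortalCommonDlls.wsp"), cab, true);

        // expand.exe -F:* extracts every file from the cab into the destination folder.
        Process.Start("expand.exe",
            string.Format("-F:* \"{0}\" \"{1}\"", cab, dest)).WaitForExit();
    }
}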
Happy coding!

Brief history of the Live@edu management agent, and how you can upgrade from one version to another

This may seem like a random entry for this blog, but it really is not. I ran with Microsoft’s Live@edu program for a very long while (and I am still actively working with it, in more of a partner / project management role), so I have a fairly large number of ex-clients who sporadically drop me a note with a question. Here is one that I got recently; perhaps my answer will shed some light on the brief technical history of the Live@edu program and its development.

So here was the question:

When MAv3 first came out, the documentation provided only gave instructions for a new install, not an upgrade.  Do you have a set of instructions handy that you could share with me so that I could perform the upgrade in place?

And here was my answer:

First of all I have to note that internally (to the Live@edu program, that is) most of the public milestones are marked by releases of MIIS/ILM management agents. Currently (at the end of 2009) we have an array of management agents released into the wild. The reason for counting milestones by MIIS/ILM management agent is rather simple: whenever the back end changes or gains new features, the management agent for automated provisioning of accounts needs to be updated to use them. Hence the de facto naming convention: MAv1, MAv2 and MAv3… I have heard people internally pronouncing it as "Ma-av One", "Ma-av Two" instead of "Em Ay One", "Em Ay Two", etc.

Well, the quick answer to this question is "you simply can’t install new software on your side and call it done", at least when we are talking about an upgrade from MAv1/v2 to MAv3. Allow me to elaborate:

In the timeframe of the "Hotmail" MAv1 and MAv2, communication between the client and the back-end server infrastructure was implemented in a very direct fashion. The management agent would make two separate calls to Microsoft’s back-end systems: first it would call the Windows Live ID servers and reserve/create the Windows Live ID for the future mailbox. Upon successful completion of that call, another call to another web service was initiated, this time to the Hotmail servers. The management agent provided Hotmail with the newly created Live ID and initiated the creation of the mailbox itself.

Sounds simple enough and reasonably logical; however, Hotmail and Live ID are two very different entities. Live ID is a global identity provider / authentication system that serves hundreds if not thousands of services like Hotmail, Messenger, Spaces, SkyDrive, and it is now open for anybody to consume Live ID as an identity. The authentication methods and APIs exposed to external clients are a world (or at least a blog post) of their own. So allowing an external connection to the Hotmail back end takes an act of god, since it is a tap into the core of Microsoft’s publicly available and highly exposed system. Going through the legal processes and technical security hoops was a trip. You can see why there was a static IP requirement, security certificate installation, permissioning, etc. On top of that, it took physical time for the Live@edu team to negotiate, on the client’s behalf, the establishment of this type of connection to the Live ID back end.

After the Live ID part of the equation was solved, the Live@edu guys had to negotiate a hole in the Hotmail defenses as well. There was no "blanket" agreement between the Live@edu side and the Hotmail side, so all settings were processed manually during each client enrollment.

As you can see, this process worked fine for a long while and was a "technically correct" solution. Make the right calls in the right order and voila: you have a managed Live ID belonging to an educational institution. However, for Live@edu to scale, this was less than acceptable. So here comes the MAv3 timeframe:

Live@edu sponsored (more or less) a development that is now called "Windows Live Admin Center" (Bing it; its previous name was "Custom Domains"). In short, Live@edu abstracted all back-end calls to Live ID and Hotmail behind the single API of Admin Center. Admin Center got lots of documented (and rather simple to use) APIs that let you make a programmatic call to a single web service, create a new Live ID and mailbox in one shot, and not worry about any other details. Admin Center would create the Live ID and the Hotmail mailbox, using the blanket agreement between Live@edu and those services to perform the operation. So no waiting on the Hotmail team, no waiting on the Live ID team, no need for IP registration, no need for proxies; everything was abstracted and accessible without depending on individual contracts between each Live@edu client and each back-end system in use. Admin Center is old news, we are talking 2007 timeframe, but it is alive and kicking, and you can take a look at those APIs on MSDN.

At this time (September/October 2009) Live@edu has moved way beyond MAv3 and the Admin Center APIs. The current hot pancake is "Outlook Live". Outlook Live is a hosted Exchange 14 (Exchange 2010) solution. It is a full-fledged, fully operational Exchange provided to students absolutely free. It comes with browser-based OWA and can be hooked up to fat Outlook or to whatever client you want. Even though I know the answer to the question "why is this free", it still boggles my mind. Not only do you get full Exchange with ever-growing mailbox quotas for free, you also get all the admin management benefits, via remote PowerShell and via the web front end.

So Outlook Live comes with its own management agent. It is called ELMA (Exchange Labs Management Agent) [I actually "fathered" this name for the product and it stuck! LOL]. My little claim to fame!

In contrast to the Hotmail MAs, the Exchange team rejected the notion of connecting from ELMA to Live ID or to Admin Center. Outlook Live provides users with a very rich interface via remote PowerShell. So all clients that have Outlook Live as their email platform use ELMA, which communicates directly with the back-end Exchange farm (not Admin Center, the Live ID web services, or anything else).

In the last couple of years ELMA has undergone several releases. The most current version (ELMA 4.0, which is in beta as we speak) ships as part of a GalSync solution (complete unattended synchronization between on-premises data source(s) and hosted Exchange). That’s a whole separate topic of conversation, or an entire dedicated blog site.

Anyhow, the current ELMA is a pure PowerShell cmdlet based solution: ILM calls ELMA, and ELMA executes cmdlets on the remote server, which in turn does all the work on the back end (calling Admin Center, Live ID, the Exchange back-end farm, the Active Directory back end, or whatever else it might need to call).

Returning to your "conversion steps" question… as you can see, it is a little trickier than simply re-installing the management agent on your end and calling it quits.

So why is the conversion process from "legacy" MAv1/v2 systems not straightforward? The most obvious reasons are:

a) "Admin Center" stores a cash of all users during creation of every account. So whenever account was created outside of those APIs those accounts will have to be ported on the back-end; which could be performed whenever you ask for it, but it is not something you can do from your end.

b) Service agreements between your institution and Hotmail/Live ID are already established. Those agreements will have to be torn down and re-established, via Admin Center in the case of the Hotmail offer, or via Exchange in the case of Outlook Live.

Both of those operations take time and have to be coordinated with all the services as well as with your institution. I know for certain that it is more than possible to do, but you need to ask for it, and you need to realize what it would take for your institution to migrate from legacy Hotmail to Admin Center (MAv3) or to Outlook Live (Exchange 2010 / Exchange Labs / Exchange 14).

So now you know everything… and I am wondering whether I will get a call from somebody with a request to shut up. LOL

FIM 2010 | Concurrent Workflow execution and “data smuggling”

Recently I was presented with a challenge to solve; the resulting solution was a little bit controversial, hence I’ve decided to share my thoughts in a blog entry.

Allow me to provide a scenario to illustrate the dilemma:

          A user attempts to create a group object in the FIM 2010 portal.

          The group creation process consists of several approval processes and notifications.

          Multiple concurrent workflow instances can be initiated upon submission of the request.

          In the end, notification(s) about the success or failure of the group creation are sent to the requestor.

The key phrase here is "multiple concurrent workflows"

Follow this logic with me for a minute

1.  Group request is submitted

2.  Workflow A is kicked off to execute the following activities:

          Initiate a manual approval process (if needed)

          Verify the group attributes for validity (future group size, scope, etc.)

          Send an approval/rejection notification back to the requestor

 

3.  Workflow B is kicked off to execute the following:

          Verify the uniqueness of the sAMAccountName against a 3rd-party service, which provides us with availability status as well as the capability to reserve an alias/name for an object (ensuring alias uniqueness throughout the entire enterprise)

          Send an approval/rejection notification back to the requestor

 

Note that Workflow A and Workflow B are executed concurrently, without awareness of each other’s existence.

I would like you to forget about my private instance of the "Alias Reservation" web service. We could be talking about ANY "foreign" system you have to communicate with from inside FIM 2010 code. It can be anything from creating an XML file with some data in it for an external system to read, to SQL calls, to AD queries. I am sure you will have some reason to do such a thing.

Assume that all the activities are executed during the "Authorization" phase, i.e., before the actual group object is created in the FIM application.

Should workflow A succeed AND workflow B succeed, the group will be created; however, should either of those workflows fail, group creation will not happen. Sounds good so far…

The gotcha comes when one of the workflows results in "deny" and the other in "allow", which is a very plausible scenario. Let’s say the requested alias is not available or not approved by internal policy, but the manual approval process succeeded. In this case the "allowing" workflow will send a "success" message to the requestor upon its own completion, and the "denying" workflow will send a "failure" message to the same requestor upon its own completion. Moreover, with the "canned" notification activity we cannot sufficiently elaborate on the exact cause of failure. As you can imagine, this is a rather confusing user experience:
You clicked submit, one of the workflows sent you a "success" message, and a few moments/hours/days later another workflow sent you a "deny" message. What are you going to do? You’ll probably call the help desk and demand an explanation.

Another side effect of approval by one stream and denial by another is that the alias for the group can be wrongfully reserved. Let’s say the alias was available and the workflow that executed the alias verification and reservation activities succeeded, meaning it went ahead and reserved the alias with our 3rd-party web service. At that point our workflow object is gone. A while later, the other "stream" (with the manual approval) returns a "deny" for whatever reason. So what happens to the reserved alias? It remains reserved. The next time the user tries to re-create the group with the same name (presumably having resolved the cause of the approver’s denial the first time around), he/she will receive an "alias not available" message.

Once again: forget about my private instance of the "Alias Reservation" web service. We could just as well be creating an XML file on some file share for an external system to read, or writing something to a SQL server to trigger some auto-magical transformation of data. The point is that we made a call to an outside system from inside a loop that never completed.

The easiest way to fix this is to merge the name validation/reservation and approval activities into a single workflow. That solves the possible ping-pong of conflicting notifications, since the activities will be executed consecutively and NOT concurrently. However, when this is not possible and we have to deal with several independent workflow threads, we face the challenge of figuring out how to avoid both this mis-communication problem and false-positive execution/data delivery to some external system.

Since we’ve decided not to merge the authorization workflows (into one consecutive, action-after-action flow), our next option is to re-think the timing of execution of each activity. We need to ensure that certain events happen at a fixed point in time.

As you may know, all Authentication workflows complete their execution before the Authorization workflows are initialized, and all Authorization workflows are executed and completed before any Action workflows. So to ensure that notification happens AFTER ALL authorization workflows have completed, we have to move the notification and the actual execution of the alias reservation into the Action phase of the request.

This presents us with another set of tasks:

          By the time the request reaches the Action phase, the requested object has already been physically created in the FIM application store. So should we detect that the object cannot be created for one reason or another, we have to ensure that we delete it before the request reaches the end of the "pipe".

          We have to store some execution/verification results from the Authorization phase in the request object itself, so we can access them during the Action phase.

The first one is relatively easy. The FIM product team provided a very handy "deleteResourceAction" activity that you can use. It is simple to implement, and I’ll leave the pleasure of discovering how it works entirely to you.

On the other hand, the storage of temporary variables between the Authorization and Action phases is an interesting enough subject to cover here.

So we have to "tattoo" values onto the CurrentRequest object so that we can access them later when they are needed.

Follow me:

          A user requests to rename a group from ABC to XYZ

          In the Authorization phase we can verify that XYZ is available and that all manual approvals executed successfully

          Since the Authorization phase completed successfully, FIM will change the object name to "XYZ" and pass the "currentRequest" object on to the next phase: Action

          Since we are bound to "release" the previously used alias "ABC", so it can be re-used by others, we have to know this value. However, as you already know, the object in the app store has been renamed to "XYZ" and the value "ABC" was discarded.

The answer is to store the value "ABC" in the "current request" object while we are in the Authorization phase and still have access to that data. Easier said than done…

My first instinct was to use the currentRequest.AddParameter() method to augment the current request with a new parameter to hold the variable I wanted to store. That didn’t exactly work: the app store threw an exception and failed on me. After several and several attempts (that would be many more than one) I discovered that I simply cannot augment the current request in this phase by adding a new CreateRequestParameter or UpdateRequestParameter object to my currentRequest.

Technically, we are not supposed to alter the "currentRequest" object while it is being executed; hence the aforementioned controversial status of this solution.

So I tried another way of smuggling data between the Authorization and Action phases of the execution.

The answer is the Microsoft.ResourceManagement.WebServices.WSResourceManagement.ResourcePropertyInfo object.

I was able to modify this object and store my data in the property info. Here is the method you might want to use if you find yourself on the same path:

private void AddResourceProperty(string propertyName, string value)
{
    // Abuse ResourcePropertyInfo as a data container: the value we want to
    // smuggle rides along in the property description itself.
    ResourcePropertyInfo resourcePropertyInfo = new ResourcePropertyInfo(value, "stringType", false, false);
    resourcePropertyInfo.MaxOccurs = 1;
    resourcePropertyInfo.MinOccurs = 0;

    // Attach it to the current request under the given name so that an
    // Action-phase activity can look it up later.
    this.CurrentRequest.ResourceProperties.Add(propertyName, resourcePropertyInfo);
}
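
For example, in the Authorization-phase activity, while "ABC" is still within reach, the call is a one-liner; the key "OriginalAlias" below is an arbitrary name of my own choosing:

// Stash the soon-to-be-discarded alias on the request itself so the
// Action-phase workflow can release it against the reservation service later.
this.AddResourceProperty("OriginalAlias", "ABC");

The Action-phase activity can then fish the ResourcePropertyInfo back out of this.CurrentRequest.ResourceProperties under the same key; I am assuming here that ResourceProperties behaves like a dictionary, which the Add(key, value) call above suggests.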

It is a little "creative" and perhaps strange to use a property description to store data, but hey, it works! So for all future data smugglers between Authorization and Action workflows in FIM 2010: there is your hidden compartment to stuff your booty in.

Conclusion:

In the MIIS/ILM timeframe I had one rule I would not break: "No external calls from inside of the code". In the FIM timeframe the rules have changed. The FIM application store is not state-based the way the Sync Engine was (and still is). Making calls to other systems from within a transaction will be a reality in many cases, so for a MIIS/ILM programmer it can be hard to adjust to this mentality. And for a novice FIM 2010 programmer (as we all are at this point) it is important to remember the transactional nature of the application store and the possibility of multiple threads and paths for a single request during execution. Calls to external systems must be thought through very well to avoid unexpected loopholes.

By the way, I am utterly amazed that you are still reading this article and have not fallen asleep.

Happy coding!

 

Got to have “light” RSS feed reader

I do realize that I came to the blogosphere and RSS feeds WAY behind the curve; however, to my surprise, I’ve discovered that there is still no "dominant" RSS reader platform for .NET.

There are plenty of players with open-source solutions and plenty of "for-profit" solutions; however, all of them seem a little over-engineered for my taste. In the end, displaying the contents of an XML file should not be hard to do. Right?

It would be nice to have a "native" .NET Framework object that wraps RSS and ATOM feeds. There are precedents for such objects in .NET: with Framework v3.5 MSFT introduced the System.DirectoryServices.AccountManagement namespace, which neatly wraps the DirectoryEntry object… it is simpler to use and does lots of the heavy lifting for you. So why not create an RssEntry object somewhere under the System.Net namespace? Where do I cast my vote for this???

So it seems that I’ve got myself a little pet project for a weekend or two. The objective is to cook up an easy-to-deal-with RSS 2.0 feed reader (at least for my own blog) and hook it up to my LostAndFoundIdentity.com site. That poor site has sat static since it was created a year ago, and it’s time to give it a little "information jolt".
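
To show how little plumbing that should take, here is a minimal sketch of the parsing end in LINQ to XML. This is just my starting point, not the finished reader: the feed URL is a placeholder, and error handling and ATOM support are omitted:

using System;
using System.Xml.Linq;

class RssPeek
{
    static void Main()
    {
        // Load the feed straight off the web (placeholder URL).
        XDocument feed = XDocument.Load("http://example.com/feed.xml");

        // An RSS 2.0 document is just <rss><channel> with a list of <item> elements.
        foreach (XElement item in feed.Descendants("item"))
        {
            Console.WriteLine("{0}\n  {1}\n  {2}\n",
                (string)item.Element("title"),
                (string)item.Element("pubDate"),
                (string)item.Element("link"));
        }
    }
}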

CodeRush or CrackRush?

CodeRush Xpress 9.1

I have just downloaded CodeRush for VS2008. It is an excellent product for a lazy, fat-fingering programmer like me. It creates classes with proper constructors; jumps between camel-case variables (LOVE this one); and draws a nice thin little line between corresponding opening and closing curly brackets to outline loops, tries, switch cases, and other enclosed blocks, so you can see visually what exactly the 5th of your 9 closing curly brackets is actually closing. Nice!

There are also a method extractor, a method-to-field converter, and a whole slew of other pleasantries.

Take a look: http://tv.devexpress.com/CRX91FeaturesCS.movie

The Visual Studio guys should purchase the whole thing and bake it into VS2010 SP1 or something. The only fear I’ve got is that it will be like super-crack: one use and I’ll be hooked on it and unable to code without it.

Hello Windows 7

I spent about two hours last night watching the Microsoft IT image of Windows 7 Enterprise x64 RTM being downloaded, installed, and configured on my laptop. It replaced the Windows Server 2008 x64 Enterprise install that I had lived with for a long while.

So far I am more than happy with 7.

1.       This is going to be my first OS since Windows 2000 Professional (this kind of ages me…) where I am not disabling all the graphical add-ons and other "beautifications". They are smooth and not obnoxiously toy-like (sorry, XP and Vista).

2.       It is FAST to boot and VERY fast to wake up

3.       It is much more "administrator"-friendly than Vista. I am still inclined to disable UAC, but now there are several levels of it. I’ll see if I can live with the minimum setting (the notch just above OFF) without being annoyed

4.       Did I say that it was FAST?

So far my verdict is: "I would install it on my mother’s computer" [that is a good thing 😉 ]