Automation of User Enablement in BPOS

If you have been working with BPOS you might have noticed one peculiar design decision made by the Microsoft team. When a user is created by the Sync Tool, the account is created in an inactive state. To activate the user, an administrator has to log in and assign an appropriate license to the account, after which the user is actually enabled/activated.

Enable/Disable vs. Activate/Deactivate

This was a bit confusing to me, but with help from Microsoft support folks (internal contacts are everything!) I've figured out the difference between Activation and Enablement in BPOS.

Activation is the process of assigning a license to the user account.

Enablement is the ability of the user to access the account.

Activation can be done only once; once a license is assigned, mail flow (if applicable to your user(s)) will begin. By disabling a user account an administrator can restrict the user's access to BPOS resources; however, mail flow is not affected by that action.

So what else could we do? Automate!

Well, naturally, as an IdM guy, I was looking into automating this process. Why would I ask an administrator to log in and activate an account when we are dealing with an automated process? So… after several hours of swearing under my breath, I've written an Extensible Management Agent that performs one-time activation of user accounts upon creation. Now I can assign an appropriate license to a user without asking an admin to log in and do it manually.

The end-state scenario looks like this:

  1. A user account is created in AD
  2. The user account is synchronized into BPOS (in our case I am running a provisioning cycle of under 2 minutes; who's got time to wait for the default sync-cycle timing?)
  3. The user account is activated by assigning an appropriate license to it and is instantly available to the user (a different license can be assigned based on the user's OU [location] or an attribute of your choice in AD; see the sketch below)

Voila! Look ma, no hands!
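
For illustration only, here is a rough PowerShell sketch of the OU-to-license decision described above. The license names, the OU paths and the contoso.com domain are placeholders, not real BPOS identifiers; the actual activation call happens inside the Extensible Management Agent and is omitted here.

```powershell
# Purely illustrative: pick a license based on the OU portion of the user's AD distinguished name.
function Get-LicenseForUser {
    param([Parameter(Mandatory)] [string] $DistinguishedName)

    switch -Wildcard ($DistinguishedName) {
        '*OU=Sales,*'      { 'StandardSuite' }   # full suite for information workers
        '*OU=Production,*' { 'DesklessWorker' }  # lighter license for deskless staff
        default            { 'StandardSuite' }   # fallback license
    }
}

$dn      = 'CN=Jane Doe,OU=Sales,DC=contoso,DC=com'
$license = Get-LicenseForUser -DistinguishedName $dn

# The management agent would assign $license during export via the BPOS provisioning
# interface; that call is specific to your implementation and is omitted here.
"Would assign license '$license' to $dn"
```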

(re)”Hello World”

Fortunately or unfortunately, Microsoft has decided to give up the Live Spaces blog engine and migrate everybody onto WordPress. I guess one can look at life as an ongoing migration. So far I am feeling good about the new site. I love all the new rich features that are available. And I finally got 'statistics' back (which the "Live" team removed from Live Spaces several months ago).

So here it goes!
“Hello World”
“Over…”

Chasing the new feature

As the FIM 2010 product wanders into the wild, inevitably more and more Identity Management professionals are working on projects where they have to make design decisions regarding FIM implementation. With the addition of an "App Store" (the Web Portal), programmers are presented with a much wider variety of options. In the last few months I have seen what is, in my opinion, a dangerous trend of "chasing the new feature" of the product in favor of more traditional design. Let's step back and take a look at the options that were available to IdM professionals before the FIM 2010 timeframe. I think a clear understanding of the features and a clear definition of goals should help an IT professional identify the appropriate tool(s) for the job.

Old Times

From 2003 to 2010, customization of an MIIS/ILM-based solution entailed writing specialized .NET code. There were a few predefined places in the product where you could plug in your custom code. The most commonly used interface exposed by the sync engine was, and still is, Microsoft.MetadirectoryServices.IMASynchronization.
Implementing that interface gave a programmer an array of options to control the behavior of objects coming from a connected directory (including advanced filtering, joining and projection), as well as similar functionality during export; you could also express your business logic for creating new entries in data sources in the provisioning module(s). So from 2003 to 2010 the life of an MIIS/ILM professional was relatively easy (or at least well defined and outlined). Once you fully understood the sequence of events and the places where you can affect the data, you were in a good place. At that point designing the right solution was a matter of properly aligning business requirements with the Sync Engine's capabilities: synchronization and provisioning.

Standing by established methodology

Since its conception the Sync Engine was, and still is, a "state-based" solution, meaning that when we work within the framework of the Sync Engine we are not looking at transactions against objects in each connected data source, but rather operating on the last known state of an entire directory. That rule is an important one to learn and fully embrace. I would argue that most computer systems are not "state-based", and most programming models are built on the assumption that you are working with a transaction. I have seen several projects, written by seasoned programmers, that I was hired to fix. There were no errors in the code as such, but rather design/architecture errors deriving from the fact that the programmers didn't take into account that they were working with a state-based system. When the "transaction" model is applied to the Sync Engine, it produces rather adverse results. A state-based system relies on all of the data being present in the system at the moment when each individual attribute of an individual object is processed. When data is not available, a programmer would, naturally, reach out to an external data source and fetch the data needed for that object. And THAT is a design problem from the Sync Engine's standpoint.
So why is an external lookup so bad, one would ask?

Don’t think outside of the box

Unlike your job interview in the early to mid-90s, when you practically had to put "I think outside of the box" on your resume to be hired (that and the ability to spell the word 'Java'), designing a well-performing Sync Engine solution was, and still is, about making sure that you fit all the data you want to access inside the box. The Sync Engine box, that is. The key word to remember here is "convergence". Data has 'converged' when it is fully synchronized and satisfies all business rules/requirements plugged into the system, meaning that all business rules are satisfied and the system has no pending changes for the object in question. Having all data "in the box" will make your solution perform better, be less complex and be more manageable.
However unachievable and "against the grain" a design decision might appear to be, you should still consider every possible option to avoid breaking the rule of "no external calls". So YOU should think outside of the box to make sure that your BOX is left to think inside of itself.
Allow me to provide an example. Several years ago I was working on a project where we had to use a Unique ID system. This Unique ID system was responsible for distributing uniformly formatted IDs to ANY system in the enterprise, regardless of platform, ownership or any other factor. An ID could be issued to a system account, an employee, a contractor or an intern. A certain subset of users might never qualify for an ID in the Sync-Engine-managed system, yet they could have an ID in other systems and therefore would have a record in the Unique ID system. The only attributes that system exposed to us were ID, isAvailable and DateOfAssignment.
When I came aboard on that project, the solution to this "problem" was an external call to the Unique ID system from the Import Attribute Flow of the HR Management Agent to get the next available ID and mark it as "reserved".
The first problem we encountered with that design was "orphaned" IDs. Somehow we had "reserved" lots of IDs that were not actually used by anybody. Troubleshooting revealed that when an object failed to be provisioned, for whatever reason, the Sync Engine would faithfully roll back the transaction; however, it would never release the ID that was reserved during the Import Attribute Flow, so each consecutive run would request yet another ID, and another, and another. As you can guess, we had plenty of "orphaned" IDs by that point.
I have also seen a number of "let's call out and check for uniqueness" blocks of code within provisioning logic. That kind of practice generally slows the system to a crawl, because every synchronization cycle requires the system to call out for every object passing through the pipe.
If you are still not convinced: the most commonly used external-call "trick" is the creation of home directories for roaming profiles. The Sync Engine doesn't come with a management agent that makes file system calls to physically create a "directory" object on the file system. I am not sure why that is, but I suspect it has something to do with the fact that Microsoft doesn't use roaming profiles internally. So every time your client asks you to create a directory, you make an external call during the export operation. What is the harm in that? Consider the following questions: a) Are you creating the very first directory on that share? b) What happens to that directory during user de-provisioning? c) What happens if you delete the connector space and have to re-provision objects?
As you can see, if you are not managing (really managing) the directory, the record, the row, the ID, or whatever it is you are calling out for, you can't guarantee convergence of the data, and therefore your solution has a greater chance of failing or performing poorly under stress or during a disaster (exactly the time when you want the system to perform as reliably and as fast as possible).

Applying existing patterns to the FIM Portal

In my IdM 101 presentations to clients I have often called the ability to use code in the Sync Engine "the product's greatest strength and the product's greatest weakness". With the introduction of 'The Portal' that statement is truer than ever.
In contrast with the Sync Engine, The Portal is a transaction-based system. It is "married" to the state-based Sync Engine by means of a "special" management agent that is not quite the same as the other management agents. The Sync Engine is a delivery vehicle for The Portal and an integral part of the product.
In the past few months I have observed a trend of using the Portal for operations it should not be used for, in my honest opinion. I was talking with one of the consultants in Europe and heard that, instead of creating an object with the Sync Engine (as should be done for all managed objects), "we had one of our guys write us a Workflow that would call PowerShell that would just create the object on the system for us". Frankly, that particular conversation generated the blog entry you are reading.
I believe the problem comes from the perception of "Hey! I am transaction-based! I can do whatever I want to do. And by the way, look, I can stick my code in this new place called a 'workflow'. Exciting!"… And that is a true statement. The Portal provides more places to "stick" your custom code than the Sync Engine ever dreamed of. You have several types of workflows, the UI, etc. So what is the problem with that?
The problem is that we need to keep in mind that we are implementing Identity Management solution(s), not chasing the most adventurous way of creating new software. I am sure it's cool to write a Workflow in .NET 4.0 with Visual Studio 2010 which calls PowerShell 2.0 which uses WinRM 2.0 to perform some wonderful operation right after the user clicks the submit button. In fact it could be a very well justified thing to do, but one should not forget about the data convergence paradigm.
Making external calls from the Portal is no different than making external calls from the Sync Engine. Yes, you can do it, but should you? Discarding the knowledge and experience accumulated in your MIIS/ILM days is careless. Yes, your toolbox has expanded, but the ability to execute some task right after your user clicks the submit button doesn't change your goal of achieving seamless management of identity; the best way to do that is to make sure your data is fully managed. Your design patterns should follow that rule and find the best possible solution(s), even if it doesn't use the trendiest technologies of the month.
The Sync Engine is the most mature part of the product; it is the delivery system for your managed objects. There is no shame in using it to the fullest possible extent. Don't flirt with your data: own it. That might mean writing an extensible management agent, creating a new Metaverse object type, analyzing FIM's expected rule entry objects, or configuring additional out-of-the-box management agent instance(s) to bring an object/attribute into the realm of managed identity. The time you spend upfront in doing so will pay off when updating the system, during disaster recovery scenarios or while troubleshooting.

How to decide which tool/method to use

The rules that I’ve discerned for myself in the last two years of working with FIM are simple. They are based on two assertions:
a) The Portal is a customer-facing, workflow-driven application.
b) The Sync Engine is the delivery vehicle.
Do I need to persistently manage the object at all times (disaster recovery included)? That's the Sync Engine's job.
Do I need to decide whether to allow or deny a particular user request? That's the Portal's job.
And 'yes', there are plenty of "gray areas", and 'no', there is no definitive answer for every solution; nevertheless these rules have helped me navigate the architectural decision-making process.
I hope this "speaking out loud" entry helps you too.
 

BPOS PCNS Extension

Lately I have been involved in a lot of internal Microsoft BPOS activity. For people who have not heard of it, BPOS is the Business Productivity Online Suite. Basically, it is Microsoft servers such as Exchange, SharePoint, Communications Server, etc. that are hosted by Microsoft in Microsoft's data center and sold to businesses as a service rather than as a software product. No need for hardware, no need for upgrades and maintenance.
Schakra has embraced 'the cloud' and bravely moved all of our internal mailboxes to BPOS. As a Microsoft partner that offers BPOS deployments to customers, this was a necessary move. Now we can experience what our customers experience and gain valuable first-hand expertise.
The very first thing I noticed after the migration was completed is that I now have two passwords to worry about: one for my local AD and another for BPOS cloud resources. BPOS comes with a rich SSO client that attempts to manage your credentials and re-configures your rich applications such as Outlook and Communicator; however, when you go to a web resource you are on your own. You have to type the login and password assigned to you. Our IT guys were not exactly a happy bunch when users began asking them to reset local and cloud passwords. Practically all the time they saved by not managing a local Exchange server they were losing on ad-hoc password resets. We have plenty of users who work remotely, some are VPNing, some are joined to clients' domains… so as you can imagine, adding another variable to password management is not an ideal place to be in.
Being an IdM guy, I could not live with that. My research indicated that there were no products employing the standard PCNS (Password Change Notification Service) that would synchronize on-premise AD passwords with the BPOS cloud. What else could I do but write one!
For several days now Schakra's internal population has been happily using the BPOS PCNS extension; we would be happy to help any BPOS customer with password synchronization issues.
The BPOS PCNS extension installs onto your existing BPOS directory synchronization box and does not require any custom code on your domain controllers, a web service of any kind, or a separate physical or virtual host. It simply augments your existing BPOS installation and synchronizes your AD passwords to BPOS passwords one to one.

Live@edu OLSync on FIM 2010

This week I completed yet another Live@edu engagement with FIM 2010. It appears that here at Schakra we are receiving more and more requests from Live@edu customers who want to use FIM 2010 as the platform for Live@edu instead of ILM 2007.

Why is FIM 2010 not yet offered to Live@edu folks?

I have been asked by end customers why Microsoft is not offering Forefront Identity Manager 2010 within the Live@edu program.
The answer is rather simple. Live@edu is a marketing program that offers a variety of Microsoft products to the educational sector for free. The most notable is the hosted Exchange 2010 solution; however, there is a plethora of other products that Microsoft packages under the Live@edu umbrella: SharePoint Server, the online version of Office 2010, SkyDrive, Spaces, etc. As you can guess, the Live@edu team is NOT the owner of all those technologies. Each technology belongs to the team that develops and supports it: Exchange 2010 naturally belongs to the Exchange team, SkyDrive and Spaces to the Windows Live team, and so on.

So why is it still ILM 2007? The answer is that when the Exchange team started development of ELMA (later OLMA), then GALSync, and finally OLSync (finally as of summer 2010, that is), there was no FIM 2010 in sight. Two years ago, when ELMA 1.0 was on the design board (I was part of the ELMA 1.0 team), the name "Forefront Identity Manager" had not even been conceived yet; it was 'ILM 2' at the time, with no defined release date and no clear upgrade path available in writing. On top of that, as you know, Microsoft offers full-fledged Premier Support to all Live@edu customers (which is rather amazing, considering that this is a free offer). For the "mother ship" to offer something like that takes a lot of confidence in the product, and therefore the offered solution has to be tested and over-tested and tested again… hence the lag in offering FIM 2010 to Live@edu customers.

In the meantime, you can rely on Microsoft partners such as my company, Schakra. Deconstructing OLSync and reconstructing it on FIM 2010 is something we certainly can offer. If you need/want your Live@edu or custom OLMA (Outlook Live Management Agent) solution running on Microsoft Forefront Identity Manager 2010, give us a call; we'll be happy to help you set things up and support it.

Auxiliary MA alternative

Recently I published the Metaverse Router project on CodePlex. This project allows the MIIS/ILM/FIM synchronization engine to operate with discrete provisioning modules instead of a monolithic provisioning DLL serving dissimilar connected directories.

One of the benefits of the Metaverse Router is that you can enable/disable 'scripted' provisioning in your Sync Engine without actually modifying the server configuration. It is also possible to enable and disable provisioning for individual modules, if you wish.

While working with one of my clients it dawned on me that this provisioning disablement could be performed mid-run of the synchronization cycle. Why is this important?

If you are familiar with the concept of the Auxiliary MA, you know that the Sync Engine can run into a configuration challenge where an object cannot be provisioned into one of the systems because an existing object with an identical distinguished name is already present in that system. The proposed solution is called an Auxiliary Management Agent. An Auxiliary MA is a basic text (or any other default type) management agent which depends on the sequence of synchronization execution and allows provisioning code to execute successfully by provisioning an "auxiliary" object first, which then allows the (pre)existing object to join to the Metaverse; afterwards the auxiliary CSEntry 'self-destroys' when it is no longer needed. I encourage you to dig through MSDN for more information. The Auxiliary MA can feel conceptually 'dry'…

Nevertheless, having an additional MA and introducing additional provisioning code is not something I would like to do when it can be avoided. So, to resolve the provisioning issue mentioned above without introducing an additional MA, we can simply disable provisioning in the Metaverse Router with a script during the Sync Engine run. With provisioning disabled, projection and joining can happen without provisioning code being executed first, which in turn solves the "auxiliary" problem. Afterwards your script can re-enable provisioning and voila: no Auxiliary MA needed.
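
For illustration, a rough PowerShell sketch of such an orchestration follows. The Set-RouterProvisioning helper and the MVRouter.config path are hypothetical placeholders (check the Metaverse Router project on CodePlex for the actual way to toggle provisioning); running run profiles through the MIIS_ManagementAgent WMI class is standard Sync Engine scripting, and 'HR MA' is just an example management agent name.

```powershell
# Hypothetical helper: flips a provisioning flag in a (made-up) router configuration file.
# The real Metaverse Router may expose this differently; adjust to the actual project.
function Set-RouterProvisioning {
    param(
        [Parameter(Mandatory)] [bool] $Enabled,
        [string] $ConfigPath = 'C:\Program Files\MVRouter\MVRouter.config'  # assumed path
    )
    [xml]$config = Get-Content -Path $ConfigPath
    $config.Router.Provisioning.Enabled = $Enabled.ToString()
    $config.Save($ConfigPath)
}

# Run a Sync Engine run profile through the standard WMI interface.
function Invoke-RunProfile {
    param(
        [Parameter(Mandatory)] [string] $MaName,
        [Parameter(Mandatory)] [string] $RunProfile
    )
    $ma = Get-WmiObject -Namespace 'root\MicrosoftIdentityIntegrationServer' `
                        -Class MIIS_ManagementAgent -Filter "Name='$MaName'"
    Write-Host "$MaName / ${RunProfile}: $($ma.Execute($RunProfile).ReturnValue)"
}

# 1. Disable provisioning so projection and joins happen without provisioning code firing.
Set-RouterProvisioning -Enabled:$false

# 2. Run the synchronization that would otherwise trip over the pre-existing DN.
Invoke-RunProfile -MaName 'HR MA' -RunProfile 'Full Synchronization'

# 3. Re-enable provisioning and synchronize again; no Auxiliary MA needed.
Set-RouterProvisioning -Enabled:$true
Invoke-RunProfile -MaName 'HR MA' -RunProfile 'Full Synchronization'
```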

I will be working on VB and PowerShell scripts to complement the Metaverse Router on CodePlex.

Happy coding!

 

Automating installation of custom FIM workflow assemblies during the development cycle

Last night I was chatting with an ex-co-worker who has just dived into the world of FIM development. He had several questions about workflow development for FIM. I pointed him to my FTE Owner Requirement workflow, mentioned before in this blog and published on CodePlex to serve as an example of a FIM workflow for people who are starting out with FIM coding.

One of the things that I found annoying, and therefore worth automating, is the "deployment" of your activity on the system. When you are working on a workflow, and especially when your workflow has a visualization (Admin UX) class, you need to make many routine "moves" before you can successfully attach to the process for debugging. So after your DLL is compiled you need to:

  1. Remove the previously GACed library from the Global Assembly Cache
  2. GAC your new assembly
  3. Copy the assembly to the "Portal" folder (along with your symbols file)
  4. Restart the FIM Service
  5. Restart IIS (if you are writing an Admin UX and not using XOML)

In my book this counts as a tedious routine, especially when you need to do it time and time again over the development cycle.

So I've written a rather basic CMD file to automate this routine. It is parameterized to allow multiple DLLs to be deployed with the same script. The script uses GACUtil.exe to work with the Global Assembly Cache. The utility comes with the .NET SDK, I think, and uses a dependency library, msvcr71.dll. To simplify everybody's life I'll include both in this ZIP file.
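
If you prefer PowerShell over a CMD file, a minimal sketch of the same routine might look like the one below. The gacutil.exe and portal folder paths are assumptions you will have to adjust for your environment; the FIM Service is restarted by its service name, FIMService.

```powershell
param(
    # One or more compiled activity DLLs; pass several to deploy them all in one run.
    [Parameter(Mandatory)] [string[]] $Assemblies,
    # Assumed locations: point these at your copy of gacutil.exe and at your portal folder.
    [string] $GacUtil    = 'C:\Tools\gacutil.exe',
    [string] $PortalPath = 'C:\FIM\PortalBin'
)

foreach ($dll in $Assemblies) {
    $name = [System.IO.Path]::GetFileNameWithoutExtension($dll)

    # 1. Remove the previously GACed copy (the error is harmless if it was never installed).
    & $GacUtil /nologo /u $name

    # 2. GAC the freshly built assembly.
    & $GacUtil /nologo /i $dll

    # 3. Copy the DLL and its symbols file next to the portal binaries for debugging.
    Copy-Item $dll -Destination $PortalPath -Force
    $pdb = [System.IO.Path]::ChangeExtension($dll, '.pdb')
    if (Test-Path $pdb) { Copy-Item $pdb -Destination $PortalPath -Force }
}

# 4. Restart the FIM Service so it loads the new assembly version.
Restart-Service -Name FIMService -Force

# 5. Restart IIS (needed when you deploy an Admin UX rather than plain XOML).
iisreset
```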

As an alternative, you might want to consider "converting" this script logic into a post-build job in Visual Studio. Personally, I have found that running it as a post-build operation is a bit time-prohibitive, since restarting the FIM and IIS services takes some time; however, you might want to think about it.

 

It is worth mentioning that if you intend to leave your activity behind, you might want to make your administrator's life easier by writing an MSI package that deploys your custom activity on the production FIM portal. Even though this post provides a simple command-line script to deploy your activity during the development cycle, the FTE Owner Requirement activity on CodePlex mentioned above contains a WiX installer project that will reliably deploy your activity on any FIM 2010 portal without you having to explain where to place the DLL, how to GAC it, and whatever else needs to be done with it.

Happy coding!