Toby Emden's existential journey through the world of Identity and Access Management
Wednesday, December 21, 2011
Fired Up About the Kindle Fire
A couple of months ago, I wondered whether I had any use for a Kindle Fire, and concluded that while it was an intriguing device, I probably didn't need another tablet in my life. Well, it turns out I was wrong. A few days ago, Santa came early with an Amazon "smiley box" containing a brand new Fire. Within ten minutes of ripping open the box, I was hooked.
I've owned an original iPad for over a year now, and use it mainly for watching movies on planes, reading eBooks on the Kindle app, playing Angry Birds when I need to take out my frustration on defenseless cartoon piglets, watching streaming media on Netflix, Hulu and SlingPlayer while on the road, and taking notes in meetings. The iPad hasn't quite replaced the need for a laptop yet, but I've certainly made good use of it. So it never occurred to me that there was room in my gadget-filled life for another tablet. That was, until I opened the smiley box.
Like the iPad, the Fire "just works" out of the box. No instructions are needed and there are no complicated setup procedures; you just turn it on and go. If you don't know what the Kindle Fire is for, the simple home screen menu says it all (Music, Video, Apps, Books, Newsstand, Docs, Web). No ambiguity there. Walk through a quick Wi-Fi configuration screen, enter your Amazon ID to register the device, and you immediately have a wealth of media at your fingertips.
Make no mistake, media consumption is what the Kindle Fire is all about; specifically, consumption of media from the Amazon ecosystem. The first thing that impressed me was that every streaming movie I have ever purchased from Amazon (usually through my TiVo box) was immediately available in my Video library. I'd forgotten that on a cold, rainy day about five years ago, I'd paid ten bucks to stream Love Actually from Amazon (don't say a word!), but there it was, sitting right in the library.
Needless to say, every eBook I'd ever purchased on my iPad Kindle app was also there. And I quickly discovered that by connecting the Kindle Fire to the USB port on my laptop, I could transfer MP4 movies to the device for subsequent viewing. Yes, I know the iPad also allows you to do that, but the Fire doesn't require a clumsy, heavyweight app like iTunes to synchronize files; it operates just like a flash drive, so transferring media is simply a matter of copy and paste. Gotta love that!
The OS is based on a custom implementation of Android 2.3 Gingerbread. While it is heavily "Amazonized", there is no hiding the fact that an Android kernel lurks under the hood, and I don't necessarily mean that in a good way. No matter how much you try to put lipstick on that Angry Birds character, Android lacks the polish, elegance and responsiveness of iOS. Functionally, it works great, but if you have been spoiled by iOS, the jerky scrolling and occasionally erratic keyboard can get a little tiresome.
Perhaps the weakest feature of the Fire is the Silk web browser, which I found to be painfully slow, especially when loading JavaScript-intensive webpages. Granted, the iOS implementation of Safari is no speed demon, but compared to Silk, it is like putting a Formula 1 car up against a 1987 Chevy Nova. I'm not sure whether this is due to the Silk application itself or the Fire's low-end hardware spec, but considering that the Fire packs a not-too-shabby 1 GHz dual-core OMAP 4 processor, I suspect it has more to do with the former.
I found it strange that the home screen menu does not contain a link for email; instead, the email client is buried under Apps. The email client offers only a basic set of functionality, but it does what it is designed to do and worked fine with my work and personal Gmail accounts, aggregating all of my mail into a "Unified Inbox". In many ways, I prefer it to the iOS mail client.
Speaking of apps, the Amazon App Store offers a surprisingly large and growing range of them. Within minutes of opening the device, I had installed Netflix, Pandora, Facebook and Hulu Plus. There wasn't a Kindle version of SlingPlayer yet, but I was able to obtain the generic SlingPlayer app for Android and then install the package using the ES File Explorer app. Simple. And yes, Angry Birds is available too.
So what of the device itself? Given that my tablet experience so far has been limited to the 9.7" iPad, I found the 7" form factor of the Kindle Fire refreshing. There is something nice about being able to comfortably hold a tablet in one hand, particularly when reading an eBook or magazine. Strangely, the smaller form factor isn't as constraining as I expected it to be; writing emails using the onscreen keyboard is no more cumbersome on the Fire than on the iPad. If anything, it is slightly easier, since the 7" form factor allows you to thumb-type in much the same way as one would on a smartphone.
The most pleasant surprise was the display, which I expected to be slightly below par given the Fire's price point. Colors are rich and vivid, contrast is outstanding, and videos are razor sharp. Some may disagree, but the Fire's display is more than a match for the iPad. And the embedded speakers are, if anything, superior to those on the iPad with a slightly broader volume range and less tinniness.
That is not to say the Kindle Fire is perfect, by any means. The omission of volume buttons is extremely puzzling, given that the device was clearly designed for media consumption. To compound matters, the onscreen volume slider is always in a different place depending on the app you are using. As for capacity, the minuscule 8 GB of storage doesn't offer much room for music or videos. Yes, I know that Amazon's cloud infrastructure theoretically reduces the need to "store" media, which is fine if you always have a Wi-Fi connection. But for a road warrior like me, Wi-Fi isn't always an option. Not all planes are Wi-Fi enabled, and have you ever tried streaming media over a hotel Wi-Fi connection? Still, the lack of storage isn't a dealbreaker, given how easy it is to transfer files back and forth over USB.
If the Kindle Fire were priced similarly to the iPad, these shortcomings would be hard to forgive. But it isn't, not by a long shot, and that is the point. Given the Kindle Fire's $199 price tag, what might be fatal shortcomings for an iPad or similarly priced tablet are trivial gripes in this case.
But the real test for me is how frequently I will use the Fire compared to the iPad. Well, after two weeks with the Fire, I've learned that it depends on the use case. If you read a lot of eBooks and eMagazines, then the Kindle Fire is a superior option given its weight and form factor. For movies, there isn't much to choose between the two; the Fire is great if you don't mind a slightly smaller screen, and it compensates for this with superior sound quality. Given those two factors alone, I have found myself more inclined to use the Fire when on a plane, grabbing a latte at Starbucks, or just catching up on some late night reading.
But the Fire isn't a productivity device like the iPad. I could not imagine myself using it to take meeting notes, fire up a quick spreadsheet, edit a slide deck or even act as an RDP thin client to access my remote servers. Then again, Amazon didn't design the Kindle Fire to be a productivity device. They clearly built it to provide a portal into the Amazon media ecosystem. The device ships with a free 30-day subscription to Amazon Prime, which in addition to free two-day shipping, provides access to a range of free streaming movies and TV shows. While this library isn't as extensive as that of Netflix, it seems to be growing rapidly and you also have the option of renting or purchasing more recent movies directly from the device.
So just when I thought I had all the gadgets and devices in my life that I could handle, the Kindle Fire has found a niche that I didn't even know existed. I may not have needed one, but iPad or not, I will certainly make good use of it. Even had it not been a Christmas present, the $199 price tag is a bargain. For consumers who want a tablet but cannot justify the $499 entry point for an iPad, the Kindle Fire offers a compelling alternative. It may be less than half the price of an entry-level iPad 2, but certainly offers more than half the functionality and features.
Monday, December 19, 2011
Gartner 2011 Magic Quadrant for IAG: Tight Competition in a Maturing Industry
Gartner has just released its first ever Magic Quadrant for Identity and Access Governance (IAG), and SailPoint appears to have emerged as a narrow victor. Unlike the Forrester Wave for IAG published earlier this year, which showed SailPoint and Aveksa leading the competition by a mile, Gartner has 4 of the 7 evaluated offerings in the top quadrant, with CA on the borderline.
Admittedly, I'm more familiar with SailPoint IdentityIQ and Oracle Identity Analytics than any of the other products featured here. Both are extremely mature, versatile offerings that are simple to deploy and provide a full range of access governance capabilities, including role analytics, detective/preventative policy enforcement and certification/remediation. From a pure governance perspective, there isn't much to choose between them.
SailPoint's acquisition of BMC's identity management offering and their incorporation of BMC's provisioning engine into IdentityIQ means that they are no longer a pure play IAG vendor, and can compete on equal terms in the IAM space with the likes of IBM, Oracle, CA and Microsoft. In recent months, SailPoint has also been adding provisioning capabilities to their extensive range of native governance connectors, which ship with the product. Of course, OIA also offers provisioning capabilities, but only when integrated with OIM or another supported provisioning product.
As I've noted before, the maturation of identity management is driving a trend away from bottom-up identity administration tools, towards more holistic, governance-based solutions. Visionaries such as SailPoint have long anticipated this evolution and are ideally positioned to take advantage of it as organizations become more sophisticated in how they approach identity governance.
The decision by both Forrester and Gartner to begin publishing IAG market analytics in 2011 acknowledges the increasing maturity of IAG. The explosive growth being experienced by Aveksa and SailPoint offers further validation, if any were needed, of this trend.
The question is, what does this mean for identity management practitioners?
Well, as one might expect, there is good news and bad news. The bad news is that it is no longer sufficient to be a technical whizz who can develop advanced customizations, custom connectors and sophisticated workflows in their product of choice. As IAM/IAG offerings become easier to deploy, offer richer, more business-friendly functionality, and adopt a less I.T.-centric approach, I expect the demand for advanced technical customizations to diminish. The good news is that the increasing maturity of these offerings will allow identity management professionals to spend less time focusing on arcane technical integrations and more time devising robust, governance-centric solutions for customers.
I'm not saying that the market for identity administration tools will go away. Of course it won't; there will always be a demand for automated provisioning. But identity administration is becoming more commoditized, and over time I expect it to be increasingly viewed as just one component of a more holistic IAG framework. Just because the tools are becoming more sophisticated doesn't mean that the operational challenges that create a need for streamlined identity administration are going away.
The real implication is that IAM/IAG solutions are evolving from mere I.T. tools into corporate governance suites, which in turn suggests that the target audience for such offerings is more likely to be a CTO, CISO or CIO than the Manager of Enterprise Applications. For identity management professionals, this means that the ability to articulate business value to a non-technical audience, deliver policy-driven solutions and demonstrate a sophisticated awareness of the regulatory landscape will become just as important as the ability to create a kick-ass workflow. Individuals with that balance of soft and hard skills are difficult to find, and will therefore remain in extremely high demand.
Which, in my opinion, is exactly how it should be.
Friday, December 9, 2011
Access Rights versus Access Usage
Having been in the IAM space for enough years to remember when the idea of a metadirectory was still "cool", I spend a lot of time thinking about where the industry is going, and more specifically, how we can enhance the value that IAM brings to our customers. Recently, one such customer articulated a requirement for an identity governance solution that not only provided them with a global view of access privileges, but allowed them to see who was accessing a particular server or file share.
It occurred to me that most modern identity solutions do a great job of providing a view into who has access to what, but not what they are actually doing with that access. These are two completely different concepts, but from a technical perspective, they don't necessarily need to be.
A standard identity management suite ships with a connector framework that exposes a common interface to abstract the logic that interacts with the target system. These interactions generally comprise standard IdM events such as account creation, modification, enablement, disablement, retrieval and deletion. Each connector "type" is, of course, responsible for invoking the appropriate native API calls on its target platform.
Since vendors are already building connectors that manage accounts across a wide range of target systems, how difficult would it be to extend these connectors to inspect logs on those same systems, and then correlate each log entry to an identity object in the same way that we already do for native accounts? For example, in addition to pulling a list of local accounts from a UNIX server and correlating them to a person's identity, a UNIX connector could expose a method to pull the server logs and update identity records with information about what actions each user had been performing on that server.
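To make the idea a little more concrete, here is a rough, product-agnostic sketch in Python. Everything in it is hypothetical (the identity_store and log_reader interfaces, the log format, the method names); the point is simply that a connector that already aggregates accounts could grow one extra method that reads native logs and attaches usage events to the identity that owns the correlated account.

    import re
    from datetime import datetime

    # Hypothetical sketch: a connector that pulls accounts today could also pull
    # native logs and correlate each entry back to an identity.
    # The pattern below assumes a typical syslog-style sshd entry; real formats vary.
    AUTH_LOG_PATTERN = re.compile(
        r"^(?P<timestamp>\w{3}\s+\d+\s[\d:]+).*sshd.*Accepted \w+ for (?P<account>\S+)"
    )

    class UnixGovernanceConnector:
        def __init__(self, identity_store, log_reader):
            self.identity_store = identity_store   # correlates native accounts to identities
            self.log_reader = log_reader           # yields raw log lines from the target server

        def aggregate_accounts(self):
            # Standard account aggregation (create/modify/disable handlers omitted).
            return self.identity_store.list_accounts()

        def aggregate_usage(self):
            # Proposed extension: parse server logs and attach usage events
            # to the identity that owns the correlated account.
            for line in self.log_reader.read_lines("/var/log/auth.log"):
                match = AUTH_LOG_PATTERN.match(line)
                if not match:
                    continue
                identity = self.identity_store.correlate(match.group("account"))
                if identity is None:
                    continue  # an orphan account -- itself a useful governance finding
                identity.record_usage(
                    event="login",
                    source="unix:auth.log",
                    occurred_at=match.group("timestamp"),
                    collected_at=datetime.utcnow(),
                )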
It's just a thought, but it strikes me that being able to see who is doing what is as foundational to robust identity governance as being able to see who has access to what.
More Thoughts on Cloud Identity
Unfortunately, I haven't had much time to blog lately, as November was essentially a wash. First, we were left without power for a couple of weeks by the epic Halloween storm that crippled much of New England early in the month. Then I took off on my annual pilgrimage to visit relatives in England. And then, of course, it was Thanksgiving. Next thing I know, the malls are filled with Christmas shoppers and the drone of those annoying seasonal tunes to which we are thankfully subjected for only a few weeks a year.
Anyway, I'm back now. While catching up on my favorite blogs, this comment on cloud-based identity by Sean O'Neill---who I consider to be one of the best minds in the business---caught my eye, mainly because it echoes my own sentiments on the topic.
Sean notes that from a technical perspective, there is nothing earth-shatteringly new about cloud computing itself, except that we now give it a fancy name. No argument from me there. But his more important point is that the increasing adoption of cloud services creates a whole new set of governance headaches for CIOs. To illustrate this point, he quotes the CIO of a major insurance company:
“One thing I have come to realize is that when I move my application to the cloud, all of the security of my networks and firewalls that I have invested in over the years disappears. The only defense I have left is identity and data security in the application.”
In my experience, that sentiment is probably not unusual among CIOs and CISOs, particularly in highly regulated verticals such as financial services, pharmaceuticals and healthcare. Entrusting identity management to the cloud may seem like a good idea to analysts, vendors and techies, but they are not the ones who would be lying awake at night, worrying about the legal and regulatory implications of their cloud identity provider suffering a catastrophic breach.
As Sean explains:
“Even if you can sue the pants off of your cloud provider, the basic problem is a breach would have occurred and your people are not involved at the security level.”
In other words, if you are a CIO and sensitive personal information about your customers and employees is leaked due to a breach at a third-party identity provider, the victims aren't going to give you a pass because you entrusted security to a cloud service. If anything, they will hold you even more liable for gross negligence. Not to mention that "not my fault" will do nothing to mitigate damage to your company's brand reputation.
The increased adoption of cloud computing is inevitable, but it is both reckless and unrealistic to view IAM as just another one of those services, particularly for large organizations.
Currently, everybody seems to be focused on the idea of moving IAM tools to the cloud (which, by the way, does nothing to alleviate the process and governance issues that make IAM projects so notoriously complex---it simply moves these problems somewhere else). Instead of viewing IAM as just another commodity, we should be thinking about how to help organizations evolve robust governance strategies for managing cloud identities. I know this isn't as sexy as the idea of cloud-based IAM products, but it is far more relevant to the average CIO.
Saturday, October 29, 2011
Is Logical/Physical Convergence the Next Big Thing in IAM?
Traditional identity and access management describes the processes, systems and policies that are used to identify individuals to information systems, control what privileges they have, and monitor what they are doing with those privileges. By now, this is all well understood.
But why should IAM be restricted only to systems? For most large organizations, the need to control physical access to secure facilities and locations is every bit as relevant to a holistic security strategy as controlling access to applications and data. Corporations spend millions of dollars on physical security systems such as CCTV, electronic barriers and identification cards, designed to prevent unauthorized personnel from accessing restricted areas. Typically, these systems are centrally managed by a Corporate Security department, so that individuals are only granted the physical access required for their jobs, and any attempted security breaches can be immediately detected.
You see where I'm going with this, right? Conceptually, physical security represents the most basic IAM use case. One could almost think of buildings as applications and areas within those buildings as fine-grained entitlements, and the paradigm for physical security is identical to that of I.T. security. Once you make that connection, there is no reason why physical security could not be assigned using RBAC. Indeed, most commercial physical security systems already provide for role-based access.
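To illustrate just how naturally the mapping falls out, here is a tiny sketch (the role, resources and helper are hypothetical, and no particular product is implied) in which physical access is modeled with exactly the same RBAC constructs we already use for logical access:

    from dataclasses import dataclass

    # Illustrative only: a building is just another resource, and an area within it
    # is just another fine-grained entitlement.
    @dataclass(frozen=True)
    class Entitlement:
        resource: str    # an application -- or a building
        privilege: str   # a transaction code -- or a door/area within the building
        kind: str        # "logical" or "physical"

    @dataclass
    class Role:
        name: str
        entitlements: frozenset

    PHARMACIST = Role("Hospital Pharmacist", frozenset({
        Entitlement("PharmacySystem", "dispense-controlled-substances", "logical"),
        Entitlement("Building 7", "pharmacy-storeroom", "physical"),
    }))

    def has_access(user_roles, resource, privilege):
        # The authorization question is identical whether the resource is an app or a door.
        return any(
            e.resource == resource and e.privilege == privilege
            for role in user_roles for e in role.entitlements
        )

    print(has_access([PHARMACIST], "Building 7", "pharmacy-storeroom"))  # True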
Yet, for some inexplicable reason, physical and logical security are still generally viewed as two separate disciplines, which makes no sense to me. I suspect this is because in most organizations, I.T. Security and Corporate Security are distinct organizations, and there is often very little cross-pollination between the two. IAM vendors, for their part, clearly haven't felt any pressing need to address this gap, and still focus almost exclusively on managing access to I.T. systems.
Some of the advantages to logical/physical convergence seem glaringly obvious:
- Reduced TCO achieved by centralized management of all access policies, instead of having to maintain separate systems and processes for logical and physical access.
- Greater compliance with regulatory mandates such as HSPD-12, GLBA, SOX and FIPS-201.
- Streamlined offboarding processes, which are particularly important when dealing with a sensitive termination scenario.
- Smartcards can serve a dual purpose of granting physical access to corporate facilities and providing a second factor of authentication to sensitive systems and data.
From a technical standpoint, supporting logical/physical convergence should not require a significant re-engineering effort for most IAM vendors. So I find myself asking, why hasn't this happened so far, and who will be the first to address the gap?
Labels:
Access Management,
Compliance,
Identity Governance,
Identity Management,
Physical Security,
Security
Friday, October 28, 2011
Is SPML Really Dead?
Over the past year or two, there has been an intense debate in the IAM community about the future viability of SPML. This began with a blog post by Burton Group's Mark Diodati in February 2010, in which he offered a bearish perspective on the poorly adopted provisioning standard.
I agree with many of the points Mark raised. Yes, SPML is too complex in its current incarnation. In trying to offer a broad range of functionality and account for every conceivable provisioning scenario, it often ends up accomplishing quite the opposite. Accordingly, SPML implementations tend to be extremely product-specific, which defeats the entire purpose of a common standard in the first place. He is also correct that SPML implementations often cause performance issues due to the burden they place on connectors, although I would argue that this is less attributable to the standard itself than to how SPML has been implemented by provisioning vendors. And I completely agree with his assertion that enhancing the standard with a common inetOrgPerson-like user schema such as "SPMLPerson" would not only promote greater convergence with SAML, but would also lead to increased adoption in the enterprise.
With that said, SPML works extremely well when implementations avoid connector-heavy and product-specific customizations (I'll come back to that shortly). In my opinion, the lack of SPML adoption isn't just because the standard itself is deficient, but because nobody has quite figured out the best way to implement SPML interfaces.
The most common use case I've encountered for SPML is when a customer wants to integrate their own proprietary UI with a commercial provisioning system. For instance, one of my customers has a sophisticated, AJAX-enabled access request portal built in .NET. It provides all the workflow functionality that one would expect to find in a commercial provisioning system, but was designed simply to manage requests and approvals rather than to automate provisioning. The customer had acquired Sun Identity Manager for provisioning automation, but didn't want to replace their elaborate, business-friendly .NET interface with SIM's antiquated UI (and who can blame them?). In this particular instance, SPML was the obvious solution. The .NET application was modified to simply fire off SPML requests to SIM on the backend, thus automating provisioning actions while shielding end users from any visible impact.
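The portal in question was built in .NET, but the shape of the integration is easy to sketch in Python. Treat the payload and endpoint below as loose approximations (the XML is not a schema-valid SPML 2.0 document, and the URL is a placeholder rather than Sun's actual interface); the point is that the requesting application only ever speaks SPML and never touches the provisioning product's native API.

    import urllib.request

    SPML_NS = "urn:oasis:names:tc:SPML:2:0"

    def build_add_request(request_id, attributes):
        # Build a simplified SPML-style addRequest payload (illustrative, not schema-valid).
        attr_xml = "".join(
            f"<attr name='{name}'><value>{value}</value></attr>"
            for name, value in attributes.items()
        )
        return (
            f"<addRequest xmlns='{SPML_NS}' requestID='{request_id}'>"
            f"<data>{attr_xml}</data>"
            f"</addRequest>"
        )

    def submit(spml_body, endpoint="https://sim.example.com/spml"):  # placeholder endpoint
        # The real deployment made a SOAP call from the .NET portal; a plain HTTP POST
        # of the XML is enough to show where the request goes.
        request = urllib.request.Request(
            endpoint,
            data=spml_body.encode("utf-8"),
            headers={"Content-Type": "text/xml"},
        )
        return urllib.request.urlopen(request).read()

    payload = build_add_request("req-0001", {"cn": "John Smith", "employeeType": "contractor"})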
In scenarios such as this, where a homegrown access request portal needs to integrate with a provisioning system, SPML works like a charm. But let's face it, this scenario doesn't require SPML specifically. In the use case I just described, Sun Identity Manager could have exposed any old SOAP or REST interface, and the integration would have been exactly the same, providing that the interface exposed the requisite methods.
The true value of a standard such as SPML is realized when providing interoperability with enterprise or cloud applications, and this is where SPML has unequivocally failed to deliver.
Which brings us back to the problem with SPML implementations. Because the standard is so broad and, unlike SAML, fails to define even a minimal schema for user profiles---much less offer a reference implementation---identity vendors have tended to implement variations of SPML that are product specific and are tightly coupled with the connector layer and underlying data stores. The result is that most SPML implementations are ugly and XML-heavy, which in turn can lead to performance issues when performing large numbers of provisioning transactions.
I have yet to see a SaaS application that supports inbound SPML messages in a way that could be called "standard". Likewise, I have yet to see a user provisioning solution that supports SPML in a non-proprietary fashion. As a result, provisioning solutions for cloud applications still depend heavily on individual connectors that all implement different product APIs, which is to say that SaaS applications are currently treated no differently from any other kind of application.
So the question remains, is SPML really dead? Well, in its current incarnation, it's hard to argue that it was ever really alive to begin with. SPML 2.0 is now five years old, and the OASIS Provisioning Services Technical Committee hasn't convened since 2008. But the critical need for interoperable provisioning is now greater than ever, particularly considering the explosive adoption of cloud applications in recent years. The fact that most identity management RFPs include a requirement for SPML confirms that standards-based integration remains a critical priority for organizations. Even as IAM maturation drives us away from traditional user-centric IAM towards more business-centric models, identity management projects still spend far too much time focusing on provisioning, and specifically on the nuances of proprietary connector architectures; this is another argument for a widely adopted provisioning standard that abstracts the complexity of underlying systems and data stores.
If SPML is to survive, then it will have to be reincarnated without any regard for backwards compatibility. For that to happen, a major player in the identity space (I'm talking about the likes of Oracle, CA or IBM here) needs to step up, take leadership and create some momentum for a new standard.
In the meantime, there are several proposals out there to compensate for the lack of momentum surrounding SPML and the critical needs that it was intended to address. Jim McGovern, never a shrinking violet, has a wish list for what he would expect to see from SPML 3.0, if it ever happens. In 2009, the visionary Nishant Kaushik proposed using OAuth to support federated provisioning needs. The SimpleCloud (SCIM) initiative---which enjoys participation from cloud providers such as Google and Salesforce, as well as identity vendors such as UnboundID and SailPoint---appears to have embraced this concept and taken it to another level. For my money, SCIM is the most promising candidate for a new provisioning standard. Seemingly inspired by the LDAP standard, SCIM also leverages SAML, OAuth 2.0 bearer tokens, OpenID and JWT, while favoring a REST API over less efficient SOAP.
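For a flavor of what SCIM-style provisioning looks like on the wire, here is a minimal Python sketch of a user-creation call. The endpoint, schema URN and token handling are approximations of the still-evolving draft, so treat them as illustrative rather than definitive:

    import json
    import urllib.request

    def create_user(base_url, bearer_token, user_name, given_name, family_name):
        # POST a JSON user resource to the provider's /Users endpoint,
        # authenticating with an OAuth 2.0 bearer token.
        payload = {
            "schemas": ["urn:scim:schemas:core:1.0"],   # draft core schema URN
            "userName": user_name,
            "name": {"givenName": given_name, "familyName": family_name},
            "active": True,
        }
        request = urllib.request.Request(
            f"{base_url}/Users",
            data=json.dumps(payload).encode("utf-8"),
            headers={
                "Content-Type": "application/json",
                "Authorization": f"Bearer {bearer_token}",
            },
        )
        return json.loads(urllib.request.urlopen(request).read())

    # Example (placeholder URL and token):
    # create_user("https://scim.example.com/v1", "<token>", "jsmith", "John", "Smith")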
In summary, I tend to agree with Mark Diodati that SPML in its current form is obsolete, but the need for an interoperability standard is now greater than ever. Two lingering questions remain. Will SCIM be the standard we've been waiting for, and what is the value proposition that will compel vendors to embrace it?
Labels:
Cloud Computing,
Identity Lifecycle,
OAuth,
OpenID,
Provisioning,
SaaS,
SailPoint,
SAML,
SCIM,
SPML,
UnboundID
Friday, October 7, 2011
A Brief Recap on OpenWorld 2011
Well, my intentions were good, at least. I had intended to blog throughout the week from Oracle OpenWorld, but ended up being so busy (this is the first year that we have had a booth at OOW) that I never had time to post anything. The event itself was even bigger than last year, with more than 45,000 attendees and 4,500 Oracle partners.
Although I didn't get to attend any sessions due to other commitments, I'm planning to download and watch the sessions of interest later. In the meantime, here are a few things that we learned this week:
- Oracle seems committed to nurturing and expanding a flourishing partner network, and appears to have no interest in providing services.
- Larry Ellison's keynote received, let's just say, a lukewarm reception. The spat between Larry and Salesforce CEO Marc Benioff cast something of a dark shadow over proceedings, especially when the former referred to Salesforce as a "roach motel". Of course, all of this pointless bickering was put into stark perspective when it peaked on the same day that Steve Jobs passed away.
- The Public Cloud is core to Oracle's vision of providing subscription-based access to business applications, including the Fusion middleware stack.
- Oracle's first partner preview of Solaris 11 proves that my favorite *nix O/S is still alive and kicking, nearly two years after the Sun acquisition (yay!). Solaris 11 is fully virtualized and contains an impressive range of performance and functionality enhancements.
- The Oracle Big Data Appliance (BDA) is a BFD!
- From an IAM perspective, not much to report, although admittedly I haven't watched the IAM sessions yet. There doesn't appear to have been much movement in this space since last year. Frankly, given Oracle's size and ambition, IAM appears to be losing some relevance for them. I hope I'm wrong about that.
There is no ambiguity about Larry's technology vision for Oracle. It can be summed up in three simple phrases: Big Data, Cloud and Parallel Processing.
Conference weeks always leave me feeling drained and needing a few days to recover. So I'm spending the weekend in Venice Beach, where I will NOT be thinking about Exadata or anything else to do with technology for the next couple of days.
Have a great weekend everyone!
Monday, October 3, 2011
New Qubera website is now online!!!
It took several months, but we finally got there just in time for OpenWorld. Check out Qubera's new website at http://www.quberasolutions.com.
Day One at OpenWorld
Not much to report so far from OpenWorld. Unfortunately, I missed Larry Ellison's keynote last night as I was exhausted after fourteen hours of traveling, helping to set up the Qubera booth and trekking up a hill with my luggage to get to our rented house in the Castro (long story). From what I understand, the keynote was somewhat underwhelming in any case. But I woke up this morning refreshed and raring to go. I'm planning to attend three sessions today, on Oracle Identity Administration, Trends in Identity Management and Identity Administration Management in the Cloud.
Will blog more later in the day.
Sunday, October 2, 2011
Visit us at Oracle OpenWorld
In a few hours, I'm heading out to San Francisco for the Oracle OpenWorld 2011 conference, arguably one of the biggest events in the I.T. calendar. Qubera Solutions will be at booth 2542 in Moscone South. All of us from the Qubera leadership team will be in town. I'm planning to be at the booth from 3-5PM on Monday, and 2-4PM on Wednesday. So if you're attending OpenWorld, please feel free to drop by and say hello. I'll be blogging from the event throughout the week.
Saturday, October 1, 2011
My Top 5 Likes/Dislikes about being an IAM Professional
Recently, a friend from the I.T. industry asked me why I chose IAM as a specialty. It got me thinking about how I got into this field, and why I decided to make a career out of it. I'd be lying if I said it was a conscious decision, although at this point in my life, it's difficult to imagine another I.T. discipline that I would find quite as rewarding, or as challenging.
The "getting in" part is easy to explain. IAM found me back in the late 90s when I was working as an enterprise architect for Travelers Insurance. Having worked there for less than a month, I was given an assignment to create a custom Web SSO and directory framework for all of the legacy host applications that were being web-enabled at the time. Back then, there were no serious vendor offerings for Web SSO (at least none that weren't extortionately priced). Even if there were any vendor offerings worthy of consideration, we had a "build, don't buy" I.T. culture in those days. Besides, there were only about a dozen applications and a few thousand external users that would be using it, and nobody was using the term identity management yet.
Immediately, I was hooked. Within two years, what started out as a tactical framework evolved into an enterprise SSO, federated identity, user provisioning, role management and directory virtualization suite (all of which was developed in-house). A dozen applications turned into several hundred, and a few thousand users turned into a couple of million. Before I knew it, I was no longer an individual contributor but was leading a large and extremely talented group dedicated solely to identity and access management. I never anticipated my career panning out the way it did, but I'm pleased it happened, mainly because of all the wonderful experiences I've had working in IAM over the years. Not to mention that I'm still extremely proud of what we accomplished there (it wasn't until several years later that we finally began replacing our homegrown suite with vendor solutions).
Anyway, I've decided to put together a short list of the things I love and hate about IAM, and would be interested in hearing from fellow IAM professionals about their own lists...
Likes
- Working with customers. Although I was always enthusiastic about IAM, I only discovered how passionate I was about it when I moved over to the consulting side of the industry and began working with customers to solve their business problems. Having spent so many years as a customer myself, I can empathize with the internal challenges each of them faces, having been there and done that. While at Qubera, I've had the privilege of working with some wonderful and highly talented professionals at customers all over the world. That's an experience I wouldn't swap for anything.
- Technological diversity. I can't think of another I.T. discipline that provides one with exposure to such a vast range of technologies and platforms. An IAM project can touch every element of an organization's technology infrastructure, so in order to be successful, you need to demonstrate extraordinary technical breadth. In any given week, for example, I might be working on projects that involve integrating an IAM system with everything from vanilla LDAP directories to healthcare systems to mainframe applications to homegrown web portals to custom client-server apps to mobile devices.
- Solving business problems. One of the pillars of our philosophy at Qubera is that IAM is a business enabler rather than just a technology. This is one of the first things we try to impress upon our customers. The sense of accomplishment one gains from solving a complex business problem for a customer and delivering quantifiable business value is an addictive feeling. IAM is one of the few I.T. disciplines where you have the opportunity to have a demonstrably positive impact on an organization's business culture.
- Constant change. Although IAM technologies and best practices have matured dramatically over the past few years, IAM itself is still very much in its infancy (or at least in its adolescence, which makes me wonder when it will begin asking for a car). Just staying abreast of all the constantly evolving IAM standards, tools and technologies out there can be a full-time job in itself. As somebody who thrives on change, I think that the day IAM stops evolving will be the day that I grow tired of it. Fortunately, there is no prospect of that happening for the foreseeable future; in fact, I believe today is the best time ever to get involved in IAM.
- No two projects are the same. Every customer likes to think that their IAM challenges are unique. At a high level, I've never found this to be the case, as the general issues faced by most organizations tend to be extremely common. However, at a more granular level, the characteristics of every IAM project truly are unique. Sometimes this is because of the technical landscape or some unusual business processes, and sometimes it's purely because of the personalities involved in the project. Whenever I engage with a new customer, I know for certain that I am going to be faced with new challenges and have new experiences. That keeps things exciting and ensures I stay sharp.
Dislikes
- Politics. IAM projects are notorious for being politically contentious, and at some point in every IAM project, politics will rear its ugly head. This is sometimes because of data stewardship or process ownership issues, sometimes because of turf wars, sometimes because project participants fear change, and sometimes because of internal disagreements over strategy and direction. No matter how careful you are to avoid it, political contention on an IAM project is as inevitable as the sunrise.
- IAM is unglamorous. If you're looking to make a big name for yourself in I.T., then IAM is probably the wrong field for you. It is extremely unfashionable, and because ROIs often tend to be squishy, it isn't a top budget priority for most organizations. Generally, the only time anybody will care who you are is if you fail, and then everybody will know your name. To some extent, your proficiency in IAM can be measured by your level of anonymity. Fortunately for me, I never wanted to be famous, otherwise I would have chosen to become a movie star instead (/snark).
- Delivering bad news. Just as doctors have to learn how to deliver bad news to patients, IAM professionals are constantly having to deliver bad news to executives. Usually, this is because a customer's security exposures turn out to be far more significant and costly than they had believed. Sometimes, it's because their project objectives and timelines are horribly unrealistic. And occasionally, it's because they lack the internal staffing to support an enterprise IAM solution, never mind implement it to begin with. Worst of all is when a customer engages you for a clean-up project after another consulting company has botched an IAM implementation, and you have to explain that everything your predecessor built needs to be ripped out and rebuilt from scratch.
- Nobody takes IAM seriously until they suffer a serious breach. This one speaks for itself. We've all been there.
- All the misunderstandings surrounding what IAM "is" and "isn't". No, it isn't a "product" that simply needs to be installed. No, it isn't just about provisioning users, standing up an LDAP directory or synchronizing passwords. No, it isn't an operational back-office function. And.... well, I'm sure you get my meaning.
Decomposing Identity Management Approval Workflows
One of the hallmarks of a poorly conceived identity management solution is a large number of workflows, particularly approval workflows. In one extreme case with which I'm familiar, a large financial services company has a Sun Identity Manager implementation that contains hundreds of individual approval workflows—one workflow for every entitlement, application or role that a user can request. They have a team of IDM engineers whose function is to build new workflows every time a new application needs to be integrated with the identity system. In most cases, this task involves cutting and pasting the XML from other similar workflows and then hardwiring them with application-specific metadata that is provided by a business analyst. Needless to say, this company eventually reached a point where the identity management system began to experience crippling performance issues.
When designing self-service request/approval workflows, it is important not to think in terms of individual applications but in terms of assets and generic approval patterns. An asset can be anything that might be requested: an application, an entitlement, a role, or even an out-of-band asset such as a laptop or cellphone. Each asset, in turn, is logically mapped to an approval pattern. Even in the largest, most complex organizations, the number of approval patterns tends to be extremely small; in most cases no more than five to ten. In fact, one tends to find that most approval patterns are variations of the following:
Let's examine the three major processes illustrated above.
This approach allows for the design of highly generic, dynamic and reusable workflow templates, which in turn facilitates the rapid integration of new assets for self-service access requests. The loosely-coupled nature of a pattern-based workflow architecture is also consistent with the governance-based implementation strategy I've described in previous posts, which is based on decoupling automated provisioning from account discovery and asset definition. Using metadata to drive execution of a provisioning action should in most cases allow administrators to "flick a switch" simply by updating the metadata itself once they are ready to turn on automated provisioning for a particular asset.
Of course, the obvious question is what does it take from a technical perspective to build an asset integration framework like this. Obviously, the specific implementation depends very much on your unique business requirements and the capabilities of the identity management system you are using, which is why I've attempted to keep this overview product-agnostic. With that said, this approach is certainly compatible with solutions such as OIM, SIM/Oracle Waveset, Novell IdM, SailPoint IdentityIQ and Quest One Identity Manager.
All of which also raises another advantage of pattern-based workflows. If at any point in the future, you plan to migrate to another identity solution, it is much easier to migrate a tiny handful of workflows than it is to migrate several thousand.
When designing self-service request/approval workflows, it is important not to think in terms of individual applications but in terms of assets and generic approval patterns. An asset can be anything that might be requested: an application, entitlement, role or even an out-of-band asset such as a laptop or cellphone. Each asset, in turn, is logically mapped to an approval pattern. Even in the largest, most complex organizations, the number of approval patterns tends to be extremely small; in most cases no more than five to ten. In fact, one tends to find that most approval patterns are variations of the following:
- Manager-Only Approval
- Asynchronous Multi-Level Approval (all approvers must complete in order)
- Synchronous Partial Approval ('x' of 'y' approvers must complete)
- Synchronous Full Approval (all approvers must complete in any order)
Figure 1: Decomposition of an IDM Approval Workflow
Let's examine the three major processes illustrated above.
Request Process: This handles an asset request from an end user or an approved designate. Optionally, this process may include generation of a request form, which collects additional, asset-specific information from the requester that is required to support the request (for example, a business case for requesting the asset).
Approval Process: Once the request has been submitted, an approval workflow is invoked. An asset-to-pattern mapping determines the particular approval pattern to invoke. Optionally, approvers may be required to complete an approval form, in which they will enter additional information that is necessary to provision the asset.
Execution Process: After all requisite approvals have been completed, the asset needs to be provisioned. This can be done automatically using a provisioning connector, or by generating a ticket to a human provisioner.
All three processes require metadata about the asset. Depending on the capabilities of the identity management system, metadata can be stored in the system itself, or even externalized in an RDBMS or LDAP directory. Among other elements, metadata may include the fields to render on asset-specific request/approval forms, email templates, escalation settings, delinquency timeouts, instructions for the execution process on what to do when all approvals have been completed, and of course the type of approval workflow pattern to invoke.
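To make the metadata idea a little more concrete, here is a minimal sketch of what a couple of externalized asset definitions might look like. Every field name here (approval_pattern, request_form, execution and so on) is an illustrative assumption on my part, not the schema of any particular identity product.

```python
# Hypothetical asset metadata records; in practice these might live in the
# identity product itself, an RDBMS table or an LDAP directory.
ASSET_CATALOG = {
    "sap-finance": {
        "display_name": "SAP Finance Module",
        "approval_pattern": "synchronous_partial",   # which generic pattern to invoke
        "approvers": ["line-manager", "app-owner", "finance-controller"],
        "approvals_required": 2,                      # 'x' of 'y' approvers
        "request_form": ["business_justification", "cost_center"],
        "approval_form": ["sap_profile"],             # extra data needed to provision
        "escalation_days": 3,
        "execution": {"mode": "ticket", "queue": "sap-admins"},
    },
    "corporate-laptop": {                             # an out-of-band asset
        "display_name": "Corporate Laptop",
        "approval_pattern": "manager_only",
        "approvers": ["line-manager"],
        "request_form": ["shipping_address"],
        "escalation_days": 5,
        "execution": {"mode": "ticket", "queue": "desktop-support"},
    },
}

def metadata_for(asset_id: str) -> dict:
    """Return the record that drives the request, approval and execution processes."""
    return ASSET_CATALOG[asset_id]
```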
This approach allows for the design of highly generic, dynamic and reusable workflow templates, which in turn facilitates the rapid integration of new assets for self-service access requests. The loosely-coupled nature of a pattern-based workflow architecture is also consistent with the governance-based implementation strategy I've described in previous posts, which is based on decoupling automated provisioning from account discovery and asset definition. Using metadata to drive execution of a provisioning action should in most cases allow administrators to "flick a switch" simply by updating the metadata itself once they are ready to turn on automated provisioning for a particular asset.
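And to illustrate the "flick a switch" point, a single generic workflow might dispatch on that metadata roughly as follows. This is only a sketch under the assumptions above; the helper functions and the mode flag are placeholders I've invented, standing in for whatever task, ticketing and connector facilities the chosen product actually provides.

```python
def ask(approver: str, request: dict) -> bool:
    """Placeholder: raise an approval task and return the decision."""
    print(f"Approval task for {approver}: {request['asset_id']} for {request['user']}")
    return True

def open_ticket(queue: str, request: dict) -> None:
    """Placeholder: route a manual provisioning ticket to a human provisioner."""
    print(f"Ticket to {queue}: provision {request['asset_id']} for {request['user']}")

def run_approval(asset: dict, request: dict) -> bool:
    """One generic workflow covering the handful of approval patterns."""
    pattern = asset["approval_pattern"]
    if pattern == "manager_only":
        return ask(request["manager"], request)
    if pattern in ("synchronous_full", "asynchronous_multilevel"):
        return all(ask(a, request) for a in asset["approvers"])
    if pattern == "synchronous_partial":
        votes = [ask(a, request) for a in asset["approvers"]]
        return sum(votes) >= asset["approvals_required"]
    raise ValueError(f"unknown approval pattern: {pattern}")

def execute(asset: dict, request: dict) -> None:
    """Execution process: a ticket today, an automated connector once the metadata says so."""
    if asset["execution"]["mode"] == "ticket":
        open_ticket(asset["execution"]["queue"], request)
    else:
        # "Flicking the switch": change the metadata to {"mode": "connector", ...}
        # and the same workflow starts provisioning automatically.
        print(f"Calling provisioning connector for {request['asset_id']}")

laptop = {"approval_pattern": "manager_only", "approvers": ["line-manager"],
          "execution": {"mode": "ticket", "queue": "desktop-support"}}
req = {"asset_id": "corporate-laptop", "user": "jdoe", "manager": "asmith"}
if run_approval(laptop, req):
    execute(laptop, req)
```

Note that nothing in the dispatcher mentions SAP, laptops or any other specific asset; integrating a new asset is purely a matter of adding a metadata record.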
Of course, the obvious question is what it takes from a technical perspective to build an asset integration framework like this. The specific implementation depends very much on your unique business requirements and the capabilities of the identity management system you are using, which is why I've attempted to keep this overview product-agnostic. With that said, this approach is certainly compatible with solutions such as OIM, SIM/Oracle Waveset, Novell IdM, SailPoint IdentityIQ and Quest One Identity Manager.
All of which also raises another advantage of pattern-based workflows. If, at any point in the future, you plan to migrate to another identity solution, it is much easier to migrate a tiny handful of workflows than it is to migrate several thousand.
Do I really need the Amazon Kindle Fire?
Okay, I admit it. I'm as much of a geek as the next person, so when I see something cool and new like the Amazon Kindle Fire tablet that was unveiled earlier this week, my first instinct is to say "I want one!" There's no rationale behind it, no logic, just the primal instincts of a geek being drawn to a new bright, shiny object.
But now that the initial excitement has worn off, I have to say that I'm underwhelmed by the specs. No MicroSD slot, a paltry 8GB of storage, no camera, no mic, no Bluetooth or HDMI support, and no 3G (at least not in the first release). An iPad killer this is not, although I was pleasantly surprised that it will include a micro-USB 2.0 port. Sure, the price point is compelling, and you can't really expect too much for $199, but that still raises the question: what is Amazon's target audience?
Perhaps the Fire is the tablet counterpart to the Google Chromebook. The hardware specs don't need to be impressive, because it is assumed that users will store all of their content in the cloud, reducing the need for local storage. In other words, the Fire is being aimed at a different audience than the iPad, which is designed to be as much of a productivity tool as a media client.
The range of apps available through the Amazon App Store is paltry in comparison to Apple's, and Apple is likely to retain a huge advantage in this area for the foreseeable future. Interestingly, the Fire will not be able to access Android Market, although I suspect some enterprising hacker will figure out a jailbreak for that restriction within days of the Fire's release. As with Apple, the Amazon App Store is a walled garden, in which all apps undergo an approval process.
But I suspect that Apple isn't really Amazon's target here. From a technical standpoint, the Fire doesn't appear to have been conceived as an iPad killer, and there are now so many devices competing in the high-end tablet space where the iPad reigns supreme that it doesn't make sense to add another one to the mix. Besides, Amazon clearly isn't concerned about making a profit on sales of Kindle Fire tablets. The device itself is a loss-leader for them, as the retail price is actually lower than the manufacturing cost, according to this analysis. The real long-term strategy behind the Kindle Fire is to draw new customers into Amazon's media ecosystem and make them dependent upon it. Anybody who purchases a Fire will receive a free one-month membership for Amazon Prime, which provides unlimited two-day shipping and, more importantly, access to a growing library of streaming movies and shows. As Business Week points out:
Amazon Prime may be the most ingenious and effective customer loyalty program in all of e-commerce, if not retail in general. It converts casual shoppers...into Amazon addicts. Analysts describe Prime as one of the main factors driving Amazon's stock price—up 296 percent in the last two years—and the main reason Amazon's sales grew 30 percent during the recession while other retailers flailed. At the same time, Prime has proven exceedingly difficult for rivals to copy: It allows Amazon to exploit its wide selection, low prices, network of third-party merchants, and finely tuned distribution system, while also keying off that faintly irrational human need to maximize the benefits of a club you have already paid to join.
If anybody should be nervous about the Kindle Fire, it should be Netflix and B&N, not Apple. The iPad dominates the higher end of the tablet market, and will likely continue to do so for the foreseeable future. The lower end of the market, however, is still up for grabs, and that is where the Kindle Fire is likely to dominate. Backed by Amazon's leviathan marketing machine, considerable resources and a vast media ecosystem, the Fire is likely to become yet another huge success for Amazon.
But none of this answers the question of whether or not I need one. I already own a first version iPad, which I use mainly for taking notes in meetings, watching movies on planes and as an RDP client for my home network. With the addition of the awesome Clamcase keyboard, my iPad doubles as an extremely effective netbook when I need it to. So when the iPad 2 was released last year, I decided to give it a miss, as it didn't offer any compelling new features that justified an upgrade. Instead, I figured I would wait another year to see what the iPad 3 had to offer, and that is still the plan.
With that said, there is still something.... I don't know.... compelling about the Kindle Fire. I'm a big fan of the 7" form factor for reading e-books, watching movies and browsing web content, and a MicroUSB port that supports the addition of an external hard drive is a killer feature for a movie buff like me. As a consultant, I spend a lot of time in hotels and on airplanes, where Wi-Fi speeds are often insufficient for streaming media from the cloud, so the ability to carry around my media on a flash drive would be huge (the failure of the iPad 2 to provide USB support was one of the major factors in my decision not to buy one). Even so, this one feature alone is not sufficient justification for replacing my iPad.
If I didn't already have an iPad, the Kindle Fire might be a no-brainer for me, but I certainly don't need two tablets. It would be like having both a notebook and a Chromebook, which I just can't see the need for. Still, the Kindle Fire will certainly be a huge success even without people like me buying one. If you don't already have a tablet, you can't really go wrong with a $199 device that provides you with access to pretty much all the media content you would ever need.
UPDATE: My colleague Chris over at Technologese adds some great technical insights into Amazon's strategy, and as usual, is right on the money.
Labels:
Amazon,
Apple,
gadgets,
Google,
iPad,
Kindle Fire,
tablets,
technology
Friday, September 30, 2011
The Dirty Secret of Identity Management - Part 2
Yesterday, I wrote about an alternative implementation strategy for identity management projects that maximizes the likelihood of success by emphasizing access governance rather than provisioning. Today, I'd like to explore this concept in greater detail.
Consider the classic approach to a provisioning-centric identity management project. A first phase may involve standing up the foundational infrastructure and seeding users from an authoritative source such as an HR system. A second phase performs automated provisioning of these users into Active Directory. A third phase extends this same functionality to an enterprise directory, and so on, and so on.
This approach encourages the notion of vertical identity management silos, each of which has to be conquered independently of the others, as illustrated here.
Figure 1: Classic Provisioning Approach
Within each of these silos, several things need to happen before automated provisioning becomes possible. Prerequisites typically include account discovery and correlation, data analysis and cleansing, and business process architecture. Accordingly, it can take a large enterprise several months to implement user provisioning for even a single resource. When an organization has potentially hundreds of systems that need to be managed, it becomes obvious why the failure rate for provisioning projects is so high.
Intuitively, one might think that after user provisioning has been implemented for the first resource, subsequent phases should become easier. After all, by this point the internal team is more comfortable with the technology and has gained valuable experience. Unfortunately, the opposite tends to happen. The problem with the siloed approach is that it precludes a holistic view of identity. It is therefore common for IAM architects to make false assumptions in early phases about how identity data will be persisted throughout the organization, only to have those assumptions shattered later on when attempting to manage a resource that has unusual (and previously undocumented) requirements. This inevitably requires rework and regression testing of processes that have already been implemented. I have yet to see an identity project where this didn't happen.
In summary, the classic silo approach so common in user provisioning projects is a recipe for budget overruns, extended timelines, complex configurations, inconsistent business logic and worst of all, failure to extend identity management to all critical systems.
A Governance-Centric Approach
Given the relatively low ROI and the high failure rate for provisioning-centric IAM projects, it is clear that a different approach is necessary. This is not to say that automated provisioning is unnecessary or even undesirable, but it should be viewed as just one piece of the identity management puzzle.
A governance-centric implementation strategy takes a horizontal, and therefore more holistic, approach to identity management, as illustrated here:
Figure 2: Governance-Centric Identity Management
Compare this to the traditional approach illustrated in Figure 1, and it should all begin to make sense. Instead of aligning project phases to vertical identity silos, the governance-based approach introduces the various functions of an identity management solution in a horizontal manner.
Phase one is now focused on the seeding, discovery and correlation of accounts for all relevant systems. Obviously, there is still a requirement to stand up the base infrastructure and seed user identities from an authoritative source, but beyond that, the goal of phase one is to build an aggregate view of identity data, providing visibility into who has access to what. This is achieved by implementing read-only connectors that go out to each resource, discover accounts and correlate them to identities where possible. There is no end-user functionality, no provisioning and thus no risk to the integrity of existing processes or data.
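As a rough illustration of what that discovery and correlation step amounts to, the sketch below applies a couple of simple matching rules to accounts pulled back by a read-only connector. The attribute names and the sample data are invented for the example; real correlation rules depend entirely on the resource and the quality of its data.

```python
# Phase-one sketch: read-only aggregation and correlation. Nothing here writes
# to the target resource; unmatched accounts are simply flagged for review.
IDENTITIES = [
    {"id": "1001", "email": "jdoe@example.com", "employee_id": "E123"},
    {"id": "1002", "email": "asmith@example.com", "employee_id": "E456"},
]

AD_ACCOUNTS = [  # what a read-only Active Directory connector might return
    {"sAMAccountName": "jdoe", "mail": "jdoe@example.com", "employeeID": "E123"},
    {"sAMAccountName": "svc-backup", "mail": None, "employeeID": None},
]

def correlate(account: dict):
    """Apply correlation rules in order; return the matching identity or None."""
    for identity in IDENTITIES:
        if account.get("employeeID") and account["employeeID"] == identity["employee_id"]:
            return identity
        if account.get("mail") and account["mail"].lower() == identity["email"].lower():
            return identity
    return None

orphans = [a["sAMAccountName"] for a in AD_ACCOUNTS if correlate(a) is None]
print("Orphaned/rogue accounts flagged for review:", orphans)
```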
With this approach, it should be possible in most cases to create an aggregate view of access privileges within days or weeks, depending on how many resources are being queried. The discovery and correlation process for each resource will immediately be able to identify any orphaned or rogue accounts, in addition to data inconsistencies. These can be flagged for remediation as part of the data clean-up exercise that is essential to the success of subsequent project phases.
Once the aggregate view has been created, it immediately becomes possible to centralize and automate access recertification requests and compliance reports, perform role mining and analytics, and even to start introducing detective policy enforcement at the enterprise level. By any standard, these are all significant benefits for a phase one release. Furthermore, a horizontal approach promotes a more holistic view of identity, which helps to prevent the proliferation of resource-specific policies, processes and configurations. From the beginning, policies are implemented globally.
By the end of phase one, connectors are live for all critical target resources and identity data has been validated and cleansed. It therefore becomes possible to approach subsequent phases with a much higher degree of confidence and certainty than would be possible with a "one resource at a time" approach.
Phase two involves layering repeatable, generic business processes on top of what has been built thus far. Such workflows will typically encapsulate identity lifecycle events (new hire, transfer, terminate, etc.) and access request/approval processes. Again, the goal here is to define processes that are generic and reusable enough that they can be implemented globally. Business process workflows should be specialized only by the metadata with which they are populated at runtime. For example, consider an access request/approval pattern. In any organization, there are normally no more than a tiny handful of "types" of approval pattern (e.g., Manager-Only, Synchronous, Asynchronous, and so on). While each resource may require the requester or approver to provide resource-specific data, even the forms used to collect that data can be dynamically created at runtime by passing arguments into a workflow.
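To give a flavour of what "dynamically created at runtime" might mean in practice, the toy sketch below renders a resource-specific request form from nothing more than a list of field definitions passed into the workflow. The field structure is purely illustrative and not tied to any product's form engine.

```python
# Sketch: one generic workflow renders resource-specific forms from metadata
# arguments at runtime, instead of hard-coding a workflow per resource.
SHAREPOINT_REQUEST_FORM = [
    {"name": "site_url", "label": "SharePoint site", "type": "text", "required": True},
    {"name": "access_level", "label": "Access level", "type": "choice",
     "options": ["Read", "Contribute", "Full Control"]},
]

def render_form(fields: list) -> str:
    """Turn field metadata into a (very simplified) textual form definition."""
    lines = []
    for f in fields:
        marker = " *" if f.get("required") else ""
        if f["type"] == "choice":
            lines.append(f"{f['label']}{marker}: one of {', '.join(f['options'])}")
        else:
            lines.append(f"{f['label']}{marker}: ____________")
    return "\n".join(lines)

print(render_form(SHAREPOINT_REQUEST_FORM))
```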
Once business processes have been defined as workflows, end users, managers and designated approvers can begin to use the identity management system to initiate and approve access requests. On the back-end of each process, a ticket or email can be generated to the "provisioner", who simply has to grant the desired privilege. Keeping human provisioners engaged is a useful mechanism for heading off any operational issues that may result from the introduction of new business processes. Based on their feedback, processes can be further refined to satisfy business requirements without impacting the data on any resource.
By the end of phase two, we have seeded all our accounts, deployed connectors into all target systems, created a global reporting capability, defined a unified view of access privileges and introduced centralized/streamlined business processes. Again, the benefits from phase two are significant; for one thing, centralized business processes offer greatly enhanced auditing capabilities, but just as importantly, they begin to establish identity management as part of the corporate culture.
Which brings us to phase three. Now that the data is clean and new business processes have been introduced, existing read-only connectors can simply be write-enabled in order to start provisioning. This should be done on a resource-by-resource basis. But without the overhead of reconciliation, correlation and data cleansing for each resource, that should be a relatively trivial exercise compared to the effort of deploying provisioning connectors using the classic project approach. If workflows have been designed correctly (naturally, this depends to some extent on the identity product being used), making the switch from manual to "last mile" user provisioning should involve little more than a configuration change.
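As a sketch of how small that configuration change could be (assuming a product that keeps connector settings separate from workflow logic, which not all do), write-enabling a resource might amount to flipping a single per-connector flag. The structure below is hypothetical, not the configuration format of any particular product.

```python
# Hypothetical per-resource connector settings. Flipping "provisioning_enabled"
# is the only change needed to move from ticket-based (manual) fulfilment to
# last-mile automated provisioning for that resource.
CONNECTORS = {
    "active_directory": {"mode": "read_write", "provisioning_enabled": True},
    "enterprise_ldap":  {"mode": "read_only",  "provisioning_enabled": False},
    "mainframe_racf":   {"mode": "read_only",  "provisioning_enabled": False},
}

def fulfil(resource: str, user: str, entitlement: str) -> None:
    cfg = CONNECTORS[resource]
    if cfg["provisioning_enabled"]:
        print(f"[auto] granting {entitlement} to {user} on {resource}")
    else:
        print(f"[ticket] asking a human provisioner to grant {entitlement} to {user} on {resource}")

fulfil("enterprise_ldap", "jdoe", "cn=finance-users")
```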
So there you have it: a holistic, governance-based strategy for implementation of an Identity Management solution. As I noted in my previous comments on this topic, the governance-centric approach is not dependent on any particular brand of identity management tool, as all of the major commercial offerings have the ability to deploy read-only connectors, correlate accounts and define loosely-coupled workflows. In fact, the only obstacle is the willingness of IAM practitioners to think differently.
Thursday, September 29, 2011
The Evolution of IAM: From Gate Keeping to Corporate Governance
Back when I was a young ankle biter taking my first tentative steps into the world of Identity and Access Management, it didn't even have a name. Admittedly, that was less than 15 years ago, but given how much IAM has matured in that time, it seems like much longer.
In those days, core IAM services such as credential management, entitlements management, user provisioning, access certification and directory integration were viewed very much as low-level, highly tactical I.T. functions. Use cases were generally simple in nature ("I need an account on System A, so I'll call my friend Mary on the helpdesk to give me an ID and password that I'll never need to change"). Regulatory mandates were less stringent, identity theft was less widespread, and very few enterprise applications were web-enabled. In fact, one of my very first identity projects in the 1990s involved devising a Web SSO solution for legacy host applications that were in the process of being "webified" (believe me, you don't want to know how I got there, but let's just say that it involved some pretty creative CGI scripting).
The first generation of identity services (I'll broadly categorize them under the heading IAM 1.0) focused primarily on basic credential management that emphasized the need to prevent unauthorized users from accessing protected information assets but added very little business value. Typically, user accounts were provisioned by standing up a directory and writing some scripts to provide basic automation. It was not uncommon for organizations to stand up unique directories for every application. In those Wild West days, I.T. departments generally did whatever they wanted, with little regard for governance or operational efficiency. Predictably, this led to the proliferation of directories and passwords, not to mention rogue accounts. To compound matters, entitlements were assigned in a cumulative fashion; very few organizations had developed the processes or tools that allowed for revocation of entitlements upon a job change.
Several vendors released tools that aimed to help organizations deal with these challenges. IAM 1.0 products were largely pure-play offerings; for example, metadirectories such as Microsoft MIIS and IBM Tivoli Directory Integrator, and web single sign-on suites such as Netegrity SiteMinder and Oblix CoreID. Yet few if any vendors offered any coherent vision for IAM, since it was still viewed very much as a back office I.T. function rather than a business enabler. Best practices were virtually non-existent, and standards such as SAML, SPML and XACML were still several years from adoption. Hence, IAM 1.0 offerings tended to adopt a highly operational focus and didn't always play nicely with other tools in the enterprise.
We can date the birth of IAM 2.0 to about the mid-2000s. By that time, it had become apparent to many organizations that manual user provisioning processes were incurring significant costs in terms of operational overhead and lost productivity. Meanwhile, regulatory mandates were becoming increasingly stringent, placing additional burden on I.T. departments. This led to the emergence of complex identity suites such as Thor Xellerate (later Oracle Identity Manager) and Waveset Lighthouse (Sun Identity Manager) that emphasized user provisioning automation but were frequently expensive to implement, difficult to maintain, and often failed to deliver promised benefits due to the continued focus on IAM as an I.T. "tool" rather than as a corporate governance asset.
This almost singular focus on user provisioning, as I noted yesterday, resulted in IAM implementations that frequently ran over budget and were in many cases poorly conceived due to inadequate governance and the lack of widely accepted best practices. The TCO of these solutions was further inflated by the extensive training and highly specialized skills that were required to implement and maintain them. There was a widespread misconception—encouraged in no small part by product vendors themselves—that an identity management suite was a silver bullet that would solve world hunger; all you had to do was install it (if only that were true).
To this day, many organizations still bear the scars from IAM 2.0 projects that were expensive failures, so it isn't surprising that the notion of "identity management" is still treated with derision in some I.T. departments.
Nevertheless, we've come a long way in the past few years. The lessons learned from those early failures have informed a set of widely accepted best practices, and identity management offerings are beginning to reflect this maturation. Accordingly, the percentage of unsuccessful IAM projects has fallen dramatically in recent years.
Meanwhile, the notion of identity management itself has evolved out of the I.T. back office and into the boardroom. This is in no small part due to an explosion in the number of high profile and expensive breaches (like this one, and this one) resulting from inadequate controls. The enterprise also has to contend with the proliferation of cloud computing, mobile devices, the increasing use of external consultants, and remote workforces. All of these factors increase the urgency for organizations to embrace a holistic IAM strategy.
IAM 3.0 reflects both the maturation that comes with experience and the evolving demands of a technology landscape that has experienced radical change over the past several years. In fact, the term "identity management" is itself becoming somewhat anachronistic, as it no longer truly reflects the challenges with which organizations are faced. Identity governance is a far more appropriate term, and next-generation IAM offerings such as SailPoint IdentityIQ increasingly reflect this paradigm shift. Such offerings place less emphasis on traditional IAM functions such as user provisioning and credential management, and more on GRC, reporting, identity analytics and centralized policy enforcement. From a technology perspective, customers are increasingly demanding streamlined, scalable solutions that emphasize ease of implementation, particularly in an era of constrained budgets. The days of vast, intrusive and complex identity suites are over.
Identity federation, directory virtualization, RBAC, ABAC, contextual authorization and next-generation entitlements management all form part of the IAM 3.0 picture. Automated user provisioning is still important, but is no longer the dominant consideration it once was. In order to become more business relevant and thus more achievable, provisioning needs to be considered in the context of a holistic identity lifecycle, which requires IAM practitioners to adopt a change of mindset.
Personally, I'm excited by the changes we're seeing in the IAM landscape right now. Many of them are long overdue, but change is always painful and I suspect that there will be many in the community who cling onto the old way of doing things. That is unfortunate, but an inevitable fact of life in this business.
Labels:
Best Practices,
Identity Governance,
Provisioning,
Security
Wednesday, September 28, 2011
The Dirty Secret of Identity Management
When engaging with a customer, I like to begin by asking them a simple question: What is the first thing you think of when you hear the term Identity Management? In nine out of ten cases, they will mention user provisioning. Yes, we all love the idea of automated user provisioning. Provisioning rocks. In fact, provisioning is arguably the most prominent driver for most identity management projects.
There is one small problem, and it is a problem that most IAM practitioners are terrified to admit. Provisioning projects are rarely successful. By which I mean that they generally fail to deliver promised benefits. Based on my own experience and that of other IAM professionals with whom I've discussed this topic, it is extremely rare for an organization to implement automated provisioning for more than 20% of all systems, regardless of what identity management product they are using. This isn't because commercial identity suites are technically inadequate, or even because organizational bottlenecks prevent the introduction of automated provisioning processes. It is because ultimately, provisioning is only a relatively minor element of a comprehensive IAM strategy, and practitioners have a tendency to overstate its importance. I've seen numerous cases where IAM project leads have focused on universal provisioning like Captain Ahab pursuing a whale, and their refusal to acknowledge that identity management involves much more than the mere deployment of provisioning connectors and workflows can lead to catastrophic project failure.
In the abstract, the expected benefits of automated provisioning make perfect sense. Operational efficiencies, reduced labor costs and faster onboarding are all legitimate metrics for an identity management ROI. But as identity management has matured, it has become increasingly apparent that provisioning is perhaps not quite as important as many practitioners once thought, and there is a compelling case to be made that identity governance has emerged as a more critical factor in the success of IAM projects.
To demonstrate this, let's begin by decomposing the various phases of a typical identity provisioning lifecycle:
- Aggregation: Core identity data is seeded from an authoritative source such as an HR system or contractor database.
- Reconciliation/Correlation: Existing accounts on target systems are discovered and inspected, and linked to identities using correlation rules.
- Request: The user requests an entitlement on the target system. Alternatively, a request can be initiated by manual or birthright assignment of a role.
- Approval: A designated individual approves or rejects the request.
- Provision: The user is granted an entitlement on the target system.
When one considers the identity lifecycle in these terms, it puts the importance of provisioning into stark perspective. Let's think about it another way. If we remove provisioning from the above lifecycle and just focus on the first four phases, would the ROI for identity management be significantly reduced? Think about how much easier it would be to deploy an identity management solution if provisioning were not a factor. I'll come back to this momentarily.
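Before answering that, it may help to picture the lifecycle as a pipeline with a detachable final step. The throwaway placeholder functions below are purely illustrative; the point is only that everything upstream of the last call stands on its own.

```python
# Throwaway sketch of the lifecycle above. The final step is detachable;
# everything before it delivers value regardless of how it is fulfilled.
def aggregate():        return [{"user": "jdoe", "source": "HR"}]
def reconcile(people):  return [{"user": "jdoe", "account": "AD\\jdoe"}]
def request(links):     return {"user": "jdoe", "entitlement": "VPN access"}
def approve(req):       return True   # manager signs off

def open_ticket(req):   print(f"ticket raised: grant {req['entitlement']} to {req['user']}")
def provision(req):     print(f"connector grants {req['entitlement']} to {req['user']}")

req = request(reconcile(aggregate()))
if approve(req):
    open_ticket(req)   # today: a human provisioner closes the loop
    # provision(req)   # later: automated "last mile" provisioning
```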
From a purely technical perspective, automated provisioning isn't terribly complex. Most commercial identity solutions ship with provisioning connectors for the vast majority of common enterprise systems and provide frameworks that enable the rapid development of connectors for custom applications. So why is automated provisioning so difficult to implement? The most common factor is poor data quality in target systems. This creates the risk of corrupting existing entitlements data when the provisioning system attempts to compensate for inconsistencies. Another reason is that system owners refuse to allow a provisioning system to touch their data. These two factors alone can create enormous bottlenecks for a provisioning-centric identity management project. Not to mention that some manual business processes, particularly in the Approval-Provision stage, can be extremely difficult to automate since they may require some discretionary judgment on the part of the provisioner. Such actions are not always possible to replace with automated logic, at least not in the short term.
It may sound like I'm contradicting my earlier statement that organizational bottlenecks are less significant than inadequate corporate governance in the failure of provisioning projects, but a robust governance framework—at the core of which is a holistic IAM strategy—is essential to prevent such bottlenecks from arising in the first place.
When considering the TCO of manual provisioning processes, labor costs tend to be relatively low, as there is usually minimal effort required of provisioners. Besides which, it is not uncommon to find that provisioners have developed their own automated techniques for assigning entitlements on the back end of the provisioning process. More significant labor costs are typically found in loss of productivity incurred by end users waiting for access requests to be approved, in addition to labor costs incurred by the need to perform manual access recertifications.
With all this in mind, let's return to my earlier question. If we take provisioning completely out of the mix, is there a significant impact on the ROI for identity management?
Consider the identity provisioning lifecycle I described earlier. If we raise it to a higher level of abstraction, we can describe it as follows:
- Aggregation/Reconciliation/Correlation
- Request/Approval
- Provision
If you focus on delivering just the first component in the knowledge that there is absolutely no risk of corrupting existing entitlements data, then it becomes possible to build out a large number of connectors and produce a unified view of entitlements data across the enterprise within a very short timeframe. This also creates the opportunity to perform data cleansing, because the identity management system is aggregating and correlating accounts and flagging any inconsistencies.
In the meantime, you can begin to layer centralized request/approval workflows on top of your identity data, using generic, parameterized workflow templates. If designed properly, workflows can be configured to generate a ticket for the human provisioner once all requisite approvals have been completed. Once the entitlement has been provisioned, the provisioner is required to sign into the identity management system and certify the request as completed. This will be verified by the identity management system during the next periodic reconciliation. Delinquent requests, unmatched/orphaned accounts and unapproved entitlements are discovered and flagged for review as part of the reconciliation process.
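At its simplest, that periodic reconciliation is just a comparison between what was approved, what the provisioner certified as completed, and what the read-only connectors actually see. The sets below are invented purely to show the idea.

```python
# Sketch of the reconciliation check described above. Each entry is a
# (user, entitlement) pair; the sample data is purely illustrative.
approved   = {("jdoe", "VPN access"), ("asmith", "Finance reports")}
certified  = {("jdoe", "VPN access")}               # provisioner marked as completed
discovered = {("jdoe", "VPN access"), ("bjones", "Finance reports")}  # seen by connectors

unapproved = discovered - approved                  # granted but never approved
missing    = certified - discovered                 # certified done but not actually present
delinquent = approved - certified - discovered      # approved, still waiting on the provisioner

print("Flag for review (unapproved/rogue):", unapproved)
print("Flag for review (certified but missing):", missing)
print("Delinquent requests still pending:", delinquent)
```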
Finally, once you have analyzed and cleansed your data and the organization has become comfortable with the notion of a centralized identity management platform, you can begin to selectively enable automated provisioning and take human provisioners out of the loop. It still may not be possible to do this for every system, but by this point, the benefits already delivered by the identity management project will have compensated for this. If the request/approval workflows have been designed correctly, then making the switch from manual to automated provisioning should merely involve a configuration change, regardless of what identity product is being used.
Adopting this approach allows you to achieve rapid benefits by providing a unified, enterprise view of who has access to what within a very short space of time, perhaps even within days or weeks. Once you have created this level of visibility into privileges across the enterprise, it is relatively trivial to centralize and automate access recertifications, enable advanced compliance reporting and even begin the process of role analytics.
To the best of my knowledge, SailPoint is currently the only IAM/IAG vendor to have embraced the methodology I just described. They refer to automation as "last mile provisioning", and the architecture of their flagship IdentityIQ product effectively promotes a loosely coupled relationship between provisioning connectors and the core identity governance engine. SailPoint has been enormously successful in proving the efficacy of this approach. Yet in theory, there is nothing to prevent any commercial identity management solution from being deployed in this manner, providing that connectors are configured in read-only mode.
The "last mile" methodology focuses on horizontal rather than vertical introduction of identity services to the enterprise. By reducing the emphasis on automated provisioning in favor of identity governance, it clearly maximizes the chances of a successful implementation without significant impact on ROI.
Do you have your head in the cloud?
Let's face it, we technologists have a passion for fads and buzzwords. Just a few years ago, everybody was talking about SOA, and it seemed that every large enterprise was developing a SOA strategy. An army of SOA vendors sprang up from nowhere, offering to help companies become "SOA compliant" (whatever the heck that meant). CIOs, in many cases dazzled by slick marketing executives, issued directives to their staff about the urgent need to embrace SOA or perish. Never mind that SOA had existed in various forms since the release of CORBA in the early 1990s, and perhaps even before that. It simply hadn't had a cool acronym like SOA before.
The same is true of AJAX. Everybody wants to build AJAX applications today, as if AJAX represents a revolutionary breakthrough in web technology. Yet the underlying concept of AJAX (which, let's be honest, is nothing more than Javascript and DHTML) has been in widespread use since the Netscape/IE days. Hell, I recently unearthed an IE5 developer guide from 1999 in my basement, which contains code samples almost indistinguishable from what you can find in most AJAX applications today. We just didn't call it AJAX in those days.
My point is this: if you spend long enough in I.T., you begin to see the same architectural trends and patterns repeat themselves every few years with different marketing labels attached. Which brings me neatly to cloud computing. Perhaps I'm becoming old and cynical, but a cloud application is in most cases little more than a very sophisticated website. For instance, GMail is widely described as a cloud application, but it is ultimately just a web based email client. Hotmail was providing exactly the same service in 1996, albeit in a more primitive form. The same applies to Box.net (XDrive in the 1990s) and even Facebook (Geocities circa 1994). Admittedly, present-day cloud services (see, even I'm using the term now) are often characterized by open APIs that enable integration with other systems, but exposure of an API is not mandatory to be classified as a cloud application. If you have a slick website that delivers a service to end users, nobody is going to correct you for calling it a cloud application.
The IAM community isn't immune to the lure of faddish marketing terms. We often talk about "cloud based identity" or "identity in the cloud", as if we are describing revolutionary concepts. But context is important, because cloud-based identity can mean two very distinct things.
The first definition represents a scenario where a user has an identity in the "cloud" that needs to be managed (for instance if they have a Google Apps or Box.net account). This may imply a requirement not just to provision accounts to a remote service provider, but also to federate credentials across organizational domains.
The second definition is where an identity management service itself is hosted in the "cloud" instead of within the enterprise.
These are two very different concepts, and should not be confused with each other.
Let's begin by exploring the first definition. Managing identities within cloud applications is conceptually no different from managing identities in any other system. You need an API to perform CRUD operations and discovery on the resource. This in turn is abstracted by a provisioning connector, which is invoked by your identity management product of choice. There is no magic involved. A cloud application is nothing more than a JAR (Just Another Resource). Of course, the functionality offered by a cloud connector may be constrained by the capabilities of the public API, but that is true of any resource.
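As a rough illustration of the "Just Another Resource" point, here is the kind of connector contract I have in mind, sketched in Java with illustrative names rather than any vendor's actual interface. The shape is ordinary CRUD plus discovery, exactly as it would be for an on-premise system; only the transport (presumably HTTPS calls to the provider's public API) and the API's limitations differ.

```java
import java.util.List;
import java.util.Map;
import java.util.Optional;

/**
 * A cloud application is Just Another Resource: the connector exposes the same
 * CRUD-plus-discovery operations an identity management product expects of any
 * target system. Whatever the provider's public API cannot do simply becomes a
 * constraint on this contract, as it would for any other resource.
 */
interface CloudResourceConnector {
    String create(Map<String, String> attributes);          // returns the provider's account id
    Optional<Map<String, String>> read(String accountId);
    void update(String accountId, Map<String, String> changes);
    void delete(String accountId);
    List<String> listAccountIds();                           // discovery, used for reconciliation
}
```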
As for federated single sign-on, well, that has been a standard use case since long before anybody began using the term "cloud computing". SAML 1.0 was first published in 2002, and even before there were standards such as SAML, architects such as myself were designing solutions that enabled federated single sign-on to external service providers using proprietary techniques for identity assertion.
The second definition of cloud-based identity is an entirely different matter. I may be wrong, but I'm still not convinced that most organizations are ready to surrender their identity management processes to an external provider. It could be argued that many companies have already gone down the path of outsourcing core HR services, so why should identity management be any different? That's a legitimate proposition, but misses the point of identity management.
Much of the cost of implementing an identity solution exists not in infrastructure, licenses or even support staff, but in the decomposition and automation of frequently convoluted business processes, which require initial and ongoing configuration of even the most sophisticated identity suites. Such costs cannot be avoided simply by moving to a cloud-based identity management offering, as customer-specific configurations will still be necessary, the only difference being that implementers would use an externally hosted web interface as opposed to one hosted on the internal network.
Then there is the issue of how a remote identity service pushes changes to secure systems inside the corporate network. System owners are typically nervous enough about allowing an internal identity management solution to touch their user data, never mind a solution that is hosted externally.
From an architectural standpoint, a cloud-based identity management solution typically requires deployment of a "gateway" or "interceptor" in the DMZ, which handles inbound CRUD requests and pushes them to the appropriate resource. This introduces a potential point of failure and requires the procurement of additional hardware, which offsets some of the savings realized by outsourcing the identity solution itself.
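For illustration only, a bare-bones sketch of that gateway pattern might look like the following (hypothetical types, not any vendor's product): the hosted service pushes signed CRUD requests to a single component in the DMZ, which verifies them and hands them to whichever internal connector can reach the target system.

```java
import java.util.Map;

/** A provisioning change pushed down from the hosted identity service. */
record CrudRequest(String operation, String resource, String accountId,
                   Map<String, String> attributes, String signature) {}

interface InternalConnector {
    void apply(CrudRequest request);
}

interface SignatureVerifier {
    boolean isValid(CrudRequest request);
}

/**
 * Minimal shape of a DMZ gateway: authenticate and verify the inbound request,
 * then forward it to the connector that can reach the target system on the
 * internal network. Every change funnels through this one component, which is
 * why it is both a security control and a potential single point of failure.
 */
public class IdentityGateway {
    private final Map<String, InternalConnector> connectorsByResource;
    private final SignatureVerifier verifier;

    IdentityGateway(Map<String, InternalConnector> connectorsByResource, SignatureVerifier verifier) {
        this.connectorsByResource = connectorsByResource;
        this.verifier = verifier;
    }

    void handle(CrudRequest request) {
        if (!verifier.isValid(request)) {
            throw new SecurityException("Rejected unsigned or tampered request");
        }
        InternalConnector connector = connectorsByResource.get(request.resource());
        if (connector == null) {
            throw new IllegalArgumentException("No connector registered for " + request.resource());
        }
        connector.apply(request);  // forward to the target system inside the network
    }

    public static void main(String[] args) {
        // Stand-in connector and verifier, purely to show the flow.
        IdentityGateway gateway = new IdentityGateway(
                Map.of("ActiveDirectory", req -> System.out.println(
                        "Applied " + req.operation() + " for " + req.accountId())),
                req -> req.signature() != null);
        gateway.handle(new CrudRequest("CREATE", "ActiveDirectory", "jdoe",
                Map.of("givenName", "John"), "sig-abc"));
    }
}
```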
I'm not completely dismissing the notion of managed identity solutions. Cloud based identity management makes perfect sense for SMBs, who are more likely to have simpler use cases, fewer business processes and perhaps cannot justify the investment in a commercial identity suite. But for a large enterprise, the case for cloud based identity management is somewhat less compelling.
Nevertheless, there is a compromise approach that has much greater potential. I expect to see widespread adoption of identity management appliances over the coming years. Appliances are preconfigured and optimized to run a specific IAM suite, and simply need to be plugged into the network. There are several advantages to an appliance. First, there is less likelihood of instability or performance degradation, as the hardware has already been optimized to run an enterprise IAM solution. Second, both the hardware and the software are often supported by the same vendor, so there is less chance of finger pointing should issues arise. Third, customers may avoid the cost of procuring separate database, application server and IAM product licenses. And finally, streamlined upgrade and patch management can reduce the operational burden.
In the meantime, cloud computing is here to stay. At least until the next bright shiny buzzword.
Reflections on Sun Identity Man.... I mean, Oracle Waveset
It doesn't seem like two and a half years since Oracle shocked the technology world by announcing its $7 billion acquisition of Sun Microsystems, although a tremendous amount has happened in that time. For the most part, it has to be said that Oracle has done an outstanding job of assimilating Sun's technologies into their product portfolio. But in the IAM space, where it is not unusual for product migrations to take several years, there is still a great deal of work to be done.
Sun Identity Manager, which has been rebranded as Oracle Waveset and will reach end of life in 2014, continues to have an enormous footprint in the enterprise. Even a year and a half after Oracle announced EOL for the product, we still see customers continuing to develop and evolve their SIM/OW implementations, and in many cases express little appetite for migrating to OIM or any other identity suite, at least in the short term.
Some of us in the IAM community were more than a little surprised at Oracle's decision to select OIM over OW as its strategic provisioning solution, considering OW's extensive installed base and maturity. But in retrospect, it was probably the right decision given the significant investment Oracle had already made in enhancing their existing IAM stack, of which OIM is a core component. With the release of OIM 11gR2, it is hard to deny that OIM is a superior product to OW in almost every respect.
The great thing about OW was that you could literally customize the tool to do anything you wanted. Its biggest flaw was that you could literally customize the tool to do anything you wanted. The open nature of OW was an engineer's dream. If a customer requirement wasn't met by out-of-the-box functionality, it could easily be provided by customizing the product using Java APIs and/or XPRESS scripts. Hell, I've even seen interfaces for OW built in .NET. Even after all this time, I'm still astounded by the creativity of some of the OW implementations out there, although I'm just as often left shaking my head in dismay, wondering how customers have allowed themselves to get into trouble with such poorly conceived customizations.
Typically, OW customers have made an enormous investment in developing sophisticated implementations and the specialized skillsets that are required to maintain them, so it is unreasonable to expect them to turn on a dime. Many implementations involve custom resource adapters, heavily customized rules and workflows, and sophisticated user interfaces. This makes it nearly impossible to automate the migration of OW to OIM or any other product, since no two OW deployments are identical.
Oracle has made a valiant attempt to smooth the migration path to OIM by releasing the OW2OIM migration tool. This tool performs a functional mapping between discovered OW configurations and their OIM equivalents, and can even automate the migration of certain objects (although not more sophisticated objects such as user forms, rule libraries and task definitions). At Qubera, we have developed our own upgrade assessment tool, which performs a deep dive analysis of an OW implementation and produces both an impact assessment and an LOE estimate. We are also planning to release a white paper, describing some architectural best practices and implementation strategies for migration.
Tools and assets such as these are intended to ease the pain of migration, rather than facilitate the migration itself. There is no simple and definitive upgrade path for OW customers, and it is misleading to suggest otherwise.
From a technical perspective, most OW functions now have functional equivalents in OIM, even if the nomenclature isn't always the same. There are notable differences, of course, such as in OIM's role capabilities; the advanced role features available to OW customers are now effectively divided between OIM and OIA. OIM's BPEL workflow engine will be alien to OW engineers, although it is far superior. Active Sync, which is one of OW's most commonly used features, does not have a direct equivalent in OIM, and of course all of those XPRESS forms and rule libraries will have to be completely rewritten. Furthermore, OW engineers making the shift to OIM will have to familiarize themselves with terms such as Generic Technology Connector, Adapter Factory, Trusted Source/Trusted Target Reconciliation, Lookup Definition and User Defined Field. Make no mistake, OW and OIM are fundamentally different products, even though they offer similar capabilities.
Skills learned in the OW world are not easily transferable to OIM, implying a steep learning curve for OW engineers, who will have to forget many of the concepts they have long taken for granted. But change is inherent to the nature of I.T., and any technology professional worth their salt should be salivating at the opportunity to evolve their skills.
So where does all this leave customers, especially in an era of increasingly constrained budgets? In some cases, they have chosen to stay the course with OW and merely run down the clock (perhaps hoping that Oracle will have a change of heart and either provide indefinite support or decide to release OW to the open source community). Some customers have decided that the effort required for an OIM upgrade warrants a reevaluation of their IdM product strategy. And of course, some customers are biting the bullet and preparing to upgrade. Every organization is faced with different challenges: budgetary considerations, infrastructure demands, resource constraints, vendor relationships, compliance mandates and political in-fighting. Thus upgrading from OW to OIM is not always a no-brainer for IAM technology owners, especially given the vast differences between the two products. As somebody who spent more than a decade as an IAM customer and led numerous RFPs, I completely understand how these decisions are made, and the arcane considerations that frequently inform them.
On a strictly personal note, I am sad to see the end of OW. It was, and still is, an outstanding product and I have many great memories (and more than a few not-so-great ones) of implementing it. The sense of triumph from building my first custom adapter, the initial frustration and eventual mastery of my very first approval workflow, the sleepless nights wrestling with the mysteries of XPRESS form handlers, the patient experimentation with undocumented APIs, and the sense of relief following the success of my very first production deployment. But those days are behind us now, and let's face it, we all have a tendency to view the past through rose-tinted glasses (which is probably why my iPod is crammed with 80s music that I tend to remember being much better than it actually was).
I have no idea whether Oracle will have a change of heart on OW support, although given the scale and complexity of many OW deployments, I have a hard time believing that all OW customers will have migrated by the end of 2014 (hell, there are still production deployments of Sun Identity Manager 5.0 out there). But for a frame of reference, consider how long it has taken organizations to migrate away from JD Edwards. It is nearly ten years since PeopleSoft acquired JDE, and six years since Oracle's acquisition of PeopleSoft, yet JDE continues to thrive under the Oracle umbrella and will be supported indefinitely.
Something tells me we haven't seen the last of Sun Identity Man.... I mean, Oracle Waveset.... by a very long shot.
Tuesday, September 27, 2011
Why Semantics Matter
It is all too easy for IAM practitioners to fall into the trap of thinking about identity management in terms of products, connectors, workflows, accounts, entitlements and correlation rules. But to your average business stakeholder—you know, the folks who ultimately determine whether an IAM project will succeed or fail—such terms are meaningless.
Years ago, before I switched careers and became an I.T. professional, I worked for an investment bank. Even though my official job title was “portfolio officer”, I was what we technologists commonly refer to as a “business customer”, using systems that provided highly sophisticated trade execution and settlement functions. Even though I had a conceptual understanding of how these systems worked, I was too busy doing my real job to know or care that when I received an error after executing a spot FX transaction, it was because a pointer to an object in our transaction database had been miscast as an unsigned integer. I just expected the problem to be fixed, pronto. And when it wasn’t, I assumed that was because our I.T. folks were overpaid layabouts whose job security depended on them being able to confuse normal people like me with acronyms and technical gibberish.
Of course, after more than a decade on the other side of the fence, I know better. Yet even though the line between I.T. and the business is far less distinct than it was in the early 1990s, the semantic disconnects are the same as they ever were. The only difference these days is that now I'm one of those overpaid layabouts I used to disdain. So whenever I’m in a conference room, listening to I.T. staff discuss the concepts of identity management with “business customers”, I’m often reminded of those days, and can determine from the expressions on the faces of the business folks exactly what they are thinking.
Identity management is a veritable minefield of semantic misunderstandings, and failure to address them at an early stage of an IAM project can be fatal. Not only because it can exacerbate tensions between I.T. and the business, but because it can cause critical requirements to be misinterpreted. To preempt such issues, I always recommend creating a glossary as part of the “Define” phase of an IAM project. That suggestion often elicits rolled eyes (mostly from other technologists, who are dismayed at the prospect that not everybody is as familiar with technical concepts as they are). My rule of thumb is this: if my mother-in-law cannot grasp the conceptual design of a solution, then I have no right to expect a business customer to understand it either. Not that I run every design past my mother-in-law, of course, but I’m sure you get my meaning.
The failure to appreciate the importance of semantics when deploying an identity management solution doesn’t just apply to implementers. In many cases, it also extends to product vendors. The business doesn’t think of identity management in terms of “Create User”, “Delete User”, “Modify User”, “Enable User” and so on. They are more likely to think of it in terms of lifecycle events (e.g. “New Hire”, “Change in Job Function”, “Name Change”, “Staff Augmentation”, “Contractor-to-FTE”, “Leave of Absence” and “Termination”). Yet very few products ship with configurable workflows that directly address these basic use cases without extensive customization.
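To illustrate the gap, here is a toy mapping, in Java with hypothetical names, between the lifecycle vocabulary the business uses and the CRUD-style operations most products expose. The point is simply that one business event usually expands into several low-level operations, and that translation has to live somewhere.

```java
import java.util.List;
import java.util.Map;

/** The vocabulary the business actually uses. */
enum LifecycleEvent { NEW_HIRE, JOB_CHANGE, NAME_CHANGE, CONTRACTOR_TO_FTE, LEAVE_OF_ABSENCE, TERMINATION }

/** The vocabulary most provisioning products expose. */
enum ProvisioningOp { CREATE_USER, MODIFY_USER, ENABLE_USER, DISABLE_USER, DELETE_USER }

public class LifecycleMapping {
    // One business event typically expands into several low-level operations.
    static final Map<LifecycleEvent, List<ProvisioningOp>> MAPPING = Map.of(
            LifecycleEvent.NEW_HIRE,          List.of(ProvisioningOp.CREATE_USER, ProvisioningOp.ENABLE_USER),
            LifecycleEvent.JOB_CHANGE,        List.of(ProvisioningOp.MODIFY_USER),
            LifecycleEvent.NAME_CHANGE,       List.of(ProvisioningOp.MODIFY_USER),
            LifecycleEvent.CONTRACTOR_TO_FTE, List.of(ProvisioningOp.MODIFY_USER, ProvisioningOp.ENABLE_USER),
            LifecycleEvent.LEAVE_OF_ABSENCE,  List.of(ProvisioningOp.DISABLE_USER),
            LifecycleEvent.TERMINATION,       List.of(ProvisioningOp.DISABLE_USER, ProvisioningOp.DELETE_USER));

    public static void main(String[] args) {
        System.out.println("TERMINATION -> " + MAPPING.get(LifecycleEvent.TERMINATION));
    }
}
```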
An even greater semantic disconnect applies to entitlements management. When not even I.T. professionals can agree on the true definition of an entitlement (never mind a role), how can we reasonably expect to find common ground with business stakeholders? To some people, a role is synonymous with an LDAP, AD or RACF group (this couldn’t be more incorrect, by the way, but that’s a topic for another post). To others, it means a specific capability within a system, a job function or the ability to access a file share (technically the same thing as an entitlement, which opens a whole different can of semantic worms). As for the difference between fine-grained and coarse-grained entitlements, prepare to be met with glazed eyes if you choose to go down that particular rabbit hole.
While an IAM practitioner may view users as abstract blocks of data to be managed, business stakeholders view them as individuals who have job functions that cannot always be clearly expressed in terms of neatly labeled entitlements and permissions. An IAM architect may understand the difference between a user and an account, but such distinctions are not always intuitive for those who haven’t spent their careers thinking about identity governance. I recall one meeting several years ago where a security architect described users as “subjects”, a term familiar to any of us who have worked with access management solutions. He used this word several times while describing his solution. Eventually, a business analyst bravely raised her hand and asked him what the heck he was talking about. That opened a can of worms, from which it became apparent that the entire context of his briefing had been lost on the non-technical folks in the room. All because of one misunderstood word.
The truth of the matter is that to business customers, identity management is simple. As well it should be if we all speak the same language. On the I.T. side of the house, we may be wrestling with architectural nuances and product constraints, but those are not the reasons why most identity projects are unsuccessful. Failure to engage and communicate effectively with business stakeholders is a far more common factor. Our mission as IAM practitioners is to abstract the business from technical complexity, and that isn't always possible if we aren't speaking the same language.
Of course, this is an important consideration for any enterprise I.T. project, but in a discipline such as IAM, which is so heavily dependent upon well-defined business processes, the importance of defining and obtaining sign-off on a common glossary cannot be overstated. This is just one of the many reasons why it is critical to engage an experienced business analyst for any identity management project.
And one last piece of advice to my fellow geeks. Please don’t look at your business customers with disdain when you use jargon that causes their eyes to glaze over. After all, you probably couldn’t do their jobs any more than they could do yours. Besides which, they probably think that you’re an overpaid layabout whose job security depends on your ability to confuse them with gibberish.
My mother-in-law would probably agree with them.