“Tier 1” Apps are Special…But At What Cost?

Greetings everyone, and happy almost-spring. Lately I’ve been focusing on understanding the costs (in all their forms) of delivering and managing Tier 1 applications.

For the sake of discussion, let’s define a “Tier 1” app in terms of reliability or “quality of service.” In a previous post, I discussed four tiers of criticality for enterprise applications. While you may disagree with some of the names, there should be little argument that there are classes of applications that are truly critical to the success or failure of a business: the so-called “mission critical” applications, also referred to more generically as “Tier 1 applications.” These applications hold a special level of importance in the corporate enterprise because their failure (measured in terms of reduced service quality or complete outages) would have a profound effect on the business, including any or all of the following:

- Widespread business stoppage with significant revenue impact
- Risk to human health or the environment
- Public, widespread damage to the organization’s reputation
- Company-wide loss of productivity

Examples of these types of applications are eCommerce (amazon.com, eBay, etc.), 911 response systems, stock and commodity trading systems, and airline reservation systems (some would also put CRM and corporate email into this group). (Note: while some have cited Exchange, SharePoint, SQL, Oracle, DB2, etc. as examples of Tier 1 apps, I claim they are missing the point. With the possible exception of SharePoint, these examples support the application and need to be treated as part of the overall solution, not as the solution itself.)

It’s obvious that these applications are important to the business for the reasons listed above, as well as others. They deliver significant value to the business when they run well and inflict a huge cost when they don’t.

Tier 1 Apps Put Quality and GRC Ahead of Cost

What I find interesting, however, is that these apps hold a special place in the minds (and the wallets) of business and IT leaders. Regardless of an organization’s IT maturity, companies will “invest” whatever it takes to keep these applications up and running at the levels of quality their customers expect. Even organizations that do not have a culture of IT maturity improvement will always find the financial and human resources needed to keep these apps highly available. While cost (delivery and ongoing maintenance) is the primary driver for most applications in the organization (60%-80% of them), for Tier 1 apps Quality of Service (QoS) and Governance, Risk Management, and Compliance (GRC) come first.

The implications of this can be profound, particularly for companies that do not have a practice of IT maturity improvement. Highly mature organizations imbue their service delivery practices with high quality, high compliance, and low risk across the entire service catalog, without incurring huge maintenance costs. Less mature organizations, on the other hand, tend to be reactive and waste resources to keep these Tier 1 applications healthy. The costs incurred can come from many sources, such as expensive consulting resources; inefficient, time-consuming processes; and an over-reliance on expensive technologies. In short, these organizations will throw whatever is necessary at a Tier 1 app to keep it up and running and to meet whatever explicit or implicit quality and compliance bars exist.

What to Do? Learn the Lessons from Tier 1 App Delivery and Management

Regardless of how you define it, every business of any appreciable size has Tier 1 applications. Unfortunately, many IT organizations do not have a very high level of IT maturity, and yet these Tier 1 apps demand it. As a result of this gap, significant wasteful costs are incurred to keep them up and running. Where pockets of good, mature IT practices exist, they probably exist within the realm of Tier 1 service delivery. Unfortunately, really bad process exists there too, all in service of maintaining high quality of service. In my years as an enterprise consultant I’ve seen countless “all hands on deck” events when a Tier 1 app went down: a mad scramble to restore service while work on other important IT functions was put aside.

For that reason, IT organizations should look at the mature practices and policies they do have (many of which will be implied) for their Tier 1 apps and see how to apply them across their IT service portfolio, and not simply because it’s “good practice.” The organization also needs to take a hard look at recent emergency situations, as much to understand the cost incurred to restore service as to understand how to minimize their occurrence. By using the lessons learned from their Tier 1 app efforts (both good and bad), IT organizations will reduce their overall delivery and operating costs by becoming more efficient in the delivery of IT services through such activities as:

- Rationalizing the costs of high availability
- Reducing the reliance on expensive consulting and support resources
- Becoming smarter and more targeted about information security
- Designing apps with the right level of service (how many “9s” are needed? See the quick sketch at the end of this post.)
- Resolving incidents more quickly with appropriate service monitoring

Conclusion

The overall message is simple and comes from an old adage: an ounce of prevention is worth a pound of cure. IT organizations and the businesses they support will lower their overall delivery and operations costs when they take the best practices learned from delivering and maintaining their Tier 1 apps and apply them generally across the organization.

All the Best,
Erik Svenson, Application Platform Lead, War on Cost Team
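A footnote on the “9s” point in the list above: the availability question is largely arithmetic, and making it explicit often changes the conversation about how much high availability an app really needs. Here is a minimal Python sketch (my own illustration, not something from the original post or any Microsoft tool) that converts an availability target into the downtime per year it actually allows:

```python
# Convert an availability target (a "number of 9s") into the maximum
# downtime per year that target still allows.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes, ignoring leap years

def allowed_downtime_minutes(availability_pct: float) -> float:
    """Minutes of downtime per year permitted by an availability percentage."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for target in (99.0, 99.9, 99.99, 99.999):
    minutes = allowed_downtime_minutes(target)
    print(f"{target}% availability allows about {minutes:,.1f} minutes of downtime per year")

# Roughly: 99% is ~3.7 days, 99.9% is ~8.8 hours, 99.99% is ~53 minutes, 99.999% is ~5 minutes.
```

Each additional 9 cuts the allowable downtime by a factor of ten while typically multiplying the cost of redundancy, monitoring, and support, which is why “how many 9s are needed?” is a business question before it is a technical one.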

IT Maturity Is More Than Process Improvement

For years, IT industry consultancies as well as hardware and software vendors have talked about people, process, and technology as the three corners of the success triangle within an IT shop. And yet the various maturity models that exist haven’t married these three elements with the levels of maturity they describe. IT maturity is often thought of only in terms of process maturity or technical maturity. In the realm of IT maturity models, there is no shortage of frameworks that cover this area, with Capability Maturity Model Integration (CMMI), COBIT, Microsoft’s Infrastructure Optimization (IO) Model, and Gartner’s Maturity Model being among the most popular. (Process improvement models such as ITIL and the Microsoft Operations Framework (MOF) share a close relationship with these maturity models but don’t, in and of themselves, promote a specific path to improved IT maturity.)

Unfortunately, none of these models does a complete job of clearly defining the maturity levels it espouses. The levels defined in these models are descriptive in nature, without clear, quantifiable boundaries between them. Further, the descriptions themselves do not cover all aspects of maturity. CMMI and Gartner, for example, focus exclusively on process improvement, with each maturity level describing a better state of process maturity. The Microsoft IO Models consistently define maturity in terms of an organization’s ability to automate processes.

What is needed is a unified maturity model that incorporates what it means for an IT organization to be mature around the following:

People. The right staff is acquired and retained for the right job. They have access to appropriate training and, at the highest levels of maturity, hold appropriate industry certifications and training credentials.

Process. While this area has been covered in depth by most of the models, process maturity should also include why processes are being improved in the first place. CMMI, for example, defines Key Process Areas (KPAs) such as “Requirements Management”, “Product Integration”, and “Causal Analysis and Resolution”, to name a few. While these process areas are categorized into various maturity levels, they are not inherently linked to specific measures of business value. Improving a process that isn’t measurably linked to enhancing the business’s effectiveness around any of the four War on Cost Value Pillars ( https://blogs.technet.com/itbizval/archive/2009/02/27/the-role-of-information-technology-in-today-s-economy.aspx ) is a waste of time and effort.

Technology. This area has also been covered in detail by various models, such as the Microsoft IO Model. Technology, however, is a means to an end, not an end in itself. Very often, technology is used to improve processes by automating them, but focusing only on process automation sells the value of technology short. Technology should also be used to improve the maturity of people, by improving training, access to information, and the quality of that information.

The War on Cost team is studying these concepts further to help unify these areas of maturity, so that we can define a more practical model of how technology can and should help customers become more mature. More to come.

Happy New Year everyone! On behalf of the War on Cost team (Elliott, Bruce, Brett, and Erik), we hope you have a prosperous and successful 2010!
All the best,
Erik, War on Cost Application Platform Lead

Practices vs. Processes

In the world of IT, as in most industries, the term “Best Practice” is often used to describe the set of activities that a business considers the most effective way to achieve a particular goal. But what about all those other activities that are not considered “best practice”? I would contend that many activities an IT organization performs would not be called best practices. They may be effective, but not the most effective. As an example, an organization might manually distribute client install discs for an application rather than create an automated software distribution mechanism. As another example, a development team may test an application completely manually rather than using techniques such as code coverage analysis and automated testing tools to increase software quality. In both examples, the job gets done, but with significantly different results for quality, cost, agility, and/or risk.

I’d like to suggest that both of these cases are examples of what I would refer to as “Practices”, borrowing the Wiktionary definition: “The ongoing pursuit of a craft or profession, particularly in medicine or the fine arts.” Forgetting the “particularly in medicine or the fine arts” bit, I find this concept useful because it provides a way to group activities that can be aligned along a maturity curve such as the Microsoft Infrastructure Optimization Model. A practice could be called “Test Pre-production Application”, for example, containing such activities as:

- Develop test plans
- Perform manual testing
- Perform code coverage analysis
- Perform automated testing
- etc.

As you can probably see, some of the sample activities above could be performed at a very basic level, whereas others are more advanced, requiring sophisticated sequences and a higher degree of automation. Further, a practice is inherently associated with one or more of the War on Cost Value Pillars: Agility, Quality of Service, Cost, or GRC (governance, risk management, and compliance).

Practices Are Not Processes

There are many models out there that describe processes. COBIT, for example, defines 34 core processes across four domains. ISO 20000 and other standards also describe groups of processes as part of their models. While these processes have their place, what they lack is a distinct linkage to sets of activities that can be assessed for maturity along some value curve such as service quality, agility, or compliance. For example, the COBIT process “Ensure Continuous Service” does not say anything about how well the organization performs that process. Also, these processes do not inherently align to specific value measures that are relevant to the business. While some alignments may be implied, no model currently published makes them clear.

The Value of Practices

By defining sets of activities into practices and further classifying those activities into groups based on a maturity curve, such as the core Microsoft Infrastructure Optimization (IO) model, it becomes possible to assess how effective an IT organization is at delivering a service by determining how mature the organization is at delivering it (i.e., what activities does the organization perform at a given maturity level?). We can also align these practices to specific value measures that the organization wants to better understand.
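To make this a bit more concrete, here is a small Python sketch of how a practice might be represented so that its maturity and value alignment can be assessed. This is purely my own illustration: the level names loosely follow the core IO model, and the pillar assignments and maturity placements are assumptions made for the example, not part of any published War on Cost model.

```python
# Illustrative sketch: a "practice" as a set of activities, each placed on a maturity
# curve and the practice as a whole aligned to one or more value pillars.
from dataclasses import dataclass, field
from enum import Enum

class Maturity(Enum):          # illustrative levels, loosely following the core IO model
    BASIC = 1
    STANDARDIZED = 2
    RATIONALIZED = 3
    DYNAMIC = 4

class ValuePillar(Enum):       # the four War on Cost Value Pillars
    AGILITY = "Agility"
    QOS = "Quality of Service"
    COST = "Cost"
    GRC = "Governance, Risk Management, and Compliance"

@dataclass
class Activity:
    name: str
    maturity: Maturity         # where on the maturity curve this activity sits

@dataclass
class Practice:
    name: str
    pillars: set               # which value pillars this practice supports
    activities: list = field(default_factory=list)

    def assessed_maturity(self) -> Maturity:
        # Crude assessment: the highest level at which the organization actually
        # performs an activity within this practice.
        return max((a.maturity for a in self.activities), key=lambda m: m.value)

# The "Test Pre-production Application" practice from the post, with assumed
# pillar alignment and maturity placements for each activity.
testing = Practice(
    name="Test Pre-production Application",
    pillars={ValuePillar.QOS, ValuePillar.COST},
    activities=[
        Activity("Develop test plans", Maturity.BASIC),
        Activity("Perform manual testing", Maturity.BASIC),
        Activity("Perform code coverage analysis", Maturity.RATIONALIZED),
        Activity("Perform automated testing", Maturity.RATIONALIZED),
    ],
)

print(testing.assessed_maturity())   # -> Maturity.RATIONALIZED
```

Even with this crude structure, a portfolio of practices could be filtered by value pillar (say, everything tied to GRC) and scored by the maturity of the activities actually performed, which is exactly the kind of assessment described above.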
For example, practices such as “Manage Policies & Compliance” and “Perform Change Impact Analysis” have a clear alignment to Governance, Risk Management, and Compliance (GRC). An organization with a clear goal to assess its maturity around GRC can look at the specific practices associated with that value dimension.

A Business Value Model

As the War on Cost team matures this definition of Practices, we will publish more information about how it relates to our larger definition of the War on Cost business value model. More to come…

All the best, Erik

Microsoft IT Strengthens Security with Data Loss Prevention (Article)

With information residing in a multitude of places, enterprises face growing risks of inadvertent or malicious leaks. The integration of Active Directory Rights Management Services into RSA Data Loss Prevention products provides a very effective solution for Microsoft IT to locate and protect sensitive data.

Learn Project 2007 quickly with the Quick Reference Guide

This just in from Toney Sisk on the IW writing team: The blog title says it all—Quick. Project management methodology can be a complex jungle of concepts. One way to help you through the jungle is with a reference guide. This popular download maps the features in Microsoft Project 2007 to commonly accepted project management practices and procedures. Click the image below to download your copy for easy browsing, or print it out for easy access.

Application Criticality

I recently completed a study of over 500 US-based customers in which I wanted to understand:

- what types of applications are being delivered and managed generally
- how much it costs to deliver and manage those applications
- how web services are being used
- how widely cloud computing solutions are being adopted

In this first post about what I learned from the investigation, I want to share my thoughts on the concept of “application criticality”. We’ve all heard the term “line of business application”. While there are many specific definitions of what an LOB app is, a generally accepted one is:

An application that is vital to running a business

This extremely vague term, while generally descriptive, doesn’t get down to the level of specificity needed to understand how one LOB app compares to another in importance, scope, or complexity. In order to better understand the population of applications being delivered and managed, I needed to get to a lower level of granularity. Our team therefore defined the concept of Application Criticality to better distinguish line of business apps from each other in terms of their importance to the business as well as their relative scope of influence on it.

Below are the four levels of criticality and their definitions. Each level is described in terms of the impact on the business if applications in that class become unavailable. While there may be other ways to define these classes of criticality, we find it useful to refer to them in terms that line of business owners typically care about.

Criticality Level: Failures of applications in this class can result in:

Mission Critical