In my last post I referred to a study my team is conducting on SOA and web services management patterns. The high-level purpose of the study was to gain insight into three questions:

- Are service-orientation practices being used to build web services (as opposed to simply building a collection of independent services)?
- Which service-orientation practices are being used (service discovery, governance, and so on)?
- Where service-orientation practices are in use, is there a measurable benefit to the business in agility, application availability, manageability, performance, and scalability?

What I'm finding in the recent data is that in no case are companies taking on SOA as a strategic initiative driven from the business or the CIO. Rather, in most cases, IT shops that see the value of service orientation build service-orientation practices into their overall software delivery process and then demonstrate that value to their business constituents. That value shows up mostly as improved agility and improved application quality of service. Interestingly, some customers have reported a degradation of scalability and/or performance (more on that in a subsequent post). Once the value is shown, the business embraces the idea and is willing to invest resources in it. For example, in every case, customers responded that the number of web service implementations will increase in the coming year, in some cases significantly. This is occurring even though most customers have not adopted some of the core service-orientation practices, such as service discovery or repository implementations. In those cases, it appears to be just a matter of time before those organizations adopt the practices. So, what's your experience been? Have you seen measurable improvements in manageability, agility, quality of service, and so on, or has it been a mixed experience?
More to come on this… All the best, Erik, Application Platform Lead, War on Cost
I’m in the process of studying what web services (now just called ‘services’) management costs IT organizations. Specifically, I’m looking at the various cost factors in environments both with a service-orientation mindset (SOA, ESB, SOI) and without.

One thing I’m seeing so far is that service discovery is a significant factor. Many companies don’t have a registry (UDDI or otherwise) that allows project teams to find services that may already exist. I find this very interesting, since the number one (arguably) benefit IT shops hope to gain from adopting services is reuse. How can an organization take advantage of service reuse if there’s no viable way to discover services? The cost of rebuilding a service that may already exist in another department of the company is probably significant. On top of that, the cost of maintaining and monitoring the ‘new’ service is not insignificant.

Another thing I’m seeing is that companies don’t have the capability of treating a ‘service’ as a business service from a management perspective. In other words, IT is still in the mindset of ‘managing’ individual services or components without looking at the whole application or ‘business service’. I wonder what your experience is at your organization. Feel free to email me or comment here.

I say that service management is the tip of the overall application portfolio management spear because I’m seeing a trend toward more service orientation, where web-based applications are becoming the norm and are being built from ever-smaller, autonomous, stateless components (i.e., services).

More insights will follow as my study progresses. In particular, I hope to quantify the costs of service management as well as bring some quantification to the value side of the equation. Stay tuned for that. All the best, Erik
There seems to be a steady stream of books published on the role of Information Technology within the business it supports. The role of IT is constantly evolving and has changed significantly from the days when the IT organization was often referred to as “data processing.” Today, in many industries, IT enables businesses to differentiate themselves from their competitors. Companies that leverage IT for competitive advantage often differ from their competitors in two ways with respect to their IT organizations: they view IT as a strategic business enabler rather than a cost center, and they work to maximize the efficiency of their IT operations so that they can focus their resources on providing value to the business and responding to today’s environment of rapidly changing business conditions.

Microsoft has developed a model, the Infrastructure Optimization model, and an initiative, the Dynamic Systems Initiative, to assist IT organizations in becoming efficient business enablers for their companies. If you aren’t familiar with the IO model or DSI, we highly recommend you follow the above links and familiarize yourself with the information and resources provided within these two programs.

In Bruce’s January 26, 2009 post, he touched upon IT being a business enabler. Bruce also discussed what we see as the four cornerstones that drive IT behavior: Cost; Agility; Quality of Service; and Governance, Risk Management and Compliance (GRC).

We recently published the results of a study we did on the IT labor costs of providing core infrastructure workloads. You can learn more about our study by visiting the Spotlight on Cost content on Microsoft.com, where you can register to download a whitepaper of our findings. One surprising discovery in our research was how few companies implement best practices to improve IT efficiency.
Of 51 best practices studied across six different workloads (networking, identity and access, data management, print sharing, email and collaboration), the average adoption rate was only 30% – meaning each of the best practices was implemented, on average, only 30% of the time. We also found that roughly 70-75% of the companies were operating at the basic maturity level per the Core IO model. The basic maturity level is the lowest and least optimized level in the model, so this is a very high percentage of companies.
Further to my ongoing threads on virtualization: make sure you have checked out the latest update on virtualization and its benefits, just released on the new Microsoft Virtualization site.
Hello to everyone, and thanks for stopping by our blog. My name is Elliott Morris, and I have the privilege of managing the War on Cost team and working with Brett, Bruce and Erik. Before I start adding posts to our blog, let me tell you a little more about the War on Cost team and what it is we do at Microsoft. At Microsoft, we are always looking to improve our products. There are many people involved in the process of planning a new product or the next version of a product; however, our team is unique for a few reasons: We collect significant amounts of operational data from enterprises to understand their costs for deploying, operating and supporting Microsoft software – our product groups do significant research to understand product requirements, but it is our role to understand requirements at
Greetings everyone. I’m Erik Svenson, a business value strategist on the System Center team at Microsoft, and I focus on application platform management costs. I posted a couple of weeks ago about the different types of apps out there (web, RIA, rich client, etc.). In all cases, they suffer from one common, cultural issue that plagues the consistent management of enterprise applications: developers aren’t paid to design and build applications with manageability in mind.

In my 25 years in the industry, I’ve consistently gotten the message that building manageability into the application is, at best, an afterthought. When I was a developer back in the mid ’80s and early ’90s, the only thought we had around the management aspect of an app was to put in somewhat meaningful messages when an error occurred. There were no conversations with IT about how the app should perform, nor even a document produced about what the application did. Nope. We just tested it and pushed it out to IT with a request for the right amount of disk and processing capacity (“right” being defined by us developers, by the way!). Part of that was because there were no management tools out there; such tools have only become mainstream in data centers for the distributed computing platform in the past fifteen years or so.

Now, there’s no excuse. With System Center Operations Manager (SCOM), WMI and the .NET Framework, we have a rich platform to easily build management capability into applications through custom alerts that are fed into Ops Manager (or any WMI consumer) as well as custom management packs. This is all wrapped in a strategic bow we call the Dynamic Systems Initiative, also known as “Dynamic IT”. And yet, few developers do this at all, or it’s an afterthought. Why? Well, I think its roots are primarily cultural, reinforced by a lack of incentives. Developers simply aren’t paid to build proactive management capabilities into their applications.
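As an illustration of how little code this takes, here is a minimal C# sketch of raising an operational alert through the Windows event log, which Ops Manager (or any event-log/WMI consumer) can turn into an alert via a monitoring rule. The service name, event source, and event ID are all hypothetical, and registering the event source normally happens once at install time with administrative rights:

```csharp
using System;
using System.Diagnostics;

class OrderProcessor
{
    // Hypothetical event source; must be registered once (e.g., at setup):
    //   EventLog.CreateEventSource("ContosoOrderService", "Application");
    const string Source = "ContosoOrderService";

    public void ProcessOrder(int orderId)
    {
        try
        {
            // ... real order-processing work would go here ...
            throw new InvalidOperationException("payment gateway timeout");
        }
        catch (Exception ex)
        {
            // Write a structured, operator-friendly event rather than a
            // developer-only error string. A management pack rule can key
            // off the source and event ID to raise an alert in Ops Manager.
            EventLog.WriteEntry(
                Source,
                $"Order {orderId} failed: {ex.Message}",
                EventLogEntryType.Error,
                1001 /* event ID for the monitoring rule to match */);
            throw;
        }
    }
}
```

The point isn’t the specific API; it’s that the error path emits something an operator can act on, not just something a developer can read.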
Even though it may take just a few lines of C# to build an alert these days, in the crush of trying to get an app out the door, these tasks are considered nice-to-haves and generally don’t get done, much in the same way commenting code isn’t a requirement.

So what’s to be done? Now that we have the tools for developers to easily build manageability, how do we make it happen? First, business stakeholders have to see the link between their need for agile, reliable applications and the capabilities offered by the management platform. This is the old “an ounce of prevention is worth a pound of cure” adage. Second, development teams need to be incented not only on delivering applications on time but also on the quality of those applications. This quality metric needs to extend beyond bug fixing: quality also has to be a function of how costly it is to recover from an app failure. Of course, this requires that these costs are tracked as part of a set of key performance indicators (KPIs) managed by IT. Finally, IT operations needs to be “in the room” with development teams early enough in the development lifecycle to provide requirements as well as to understand the nature of the application.

In the coming months, I’ll be studying what it costs to manage a “bad app” and a “good app” across the different types of applications out there. In the meantime, what do you think? Does this ring true for you and your organization? Let me know. All the best, Erik
Hi, my name is Brett Williams. I’m the focus owner for datacenter and virtualization on the War on Cost team and, like my colleagues, will be a regular contributor to this Itbizval blog.
For years, organizations have been chasing the holy grail of “ROI on IT investments”.
In general, there are six different types (archetypes) of line-of-business (LOB) applications prevalent in modern corporations today:

- Rich Client Applications – Applications of this type are usually developed as stand-alone applications with a graphical user interface that displays data using a range of controls. Rich client applications can be designed for disconnected and occasionally connected scenarios because they run on the client machine.

- Web Applications – Applications of this type typically support connected scenarios and can support different browsers running on a range of operating systems and platforms. Web applications have no client-side scripts or components; web servers serve only HTML to the client.

- Rich Internet Applications (RIA) – Applications of this type can be developed to support multiple platforms and multiple browsers, displaying rich media or graphical content and providing a higher-fidelity user experience than traditional web applications. Rich Internet applications run in a browser sandbox that restricts access to some devices on the client.

- Services Applications – The basic goal in this type of application is to achieve loose coupling between the client and the server. Services expose business functionality and allow clients to access it from a local or remote machine. Service operations are called using messages, based on XML schemas, passed over a transport channel. These applications may be part of a service-oriented architecture (SOA) or just a collection of web services used for specific application solutions.

- Mobile Applications – Applications of this type can be developed as thin client or rich client applications. Rich client mobile applications can support disconnected or occasionally connected scenarios; web or thin client applications support connected scenarios only. Device resources may prove to be a constraint when designing mobile applications.
- Cloud-based Services/Applications – Applications in this category are services deployed into either a private or public cloud infrastructure.

Many organizations naturally have a combination of most or all of these (maybe not cloud services/applications yet!), some of which are packaged applications, while others are developed in-house. So, here’s the question: are the populations of the different types of applications random, or are there patterns to the types of applications determined by some set of drivers?

From what I’ve discovered, management concerns drive many of the decisions to deploy applications of specific types, namely web applications. Arguably, these applications enjoy low deployment effort and cost compared to other types that require components to be installed on a client device. The services that comprise web applications can also, for the most part, be centrally monitored without having to track utilization, configuration, and failures on client computers. The downside is that they tend to offer lower fidelity of user experience than any other type of application.

Now, if you’re an optimist, Rich Internet Applications (RIA) have the best of both worlds, providing a rich user experience with a relatively thin client footprint, in some cases relying on a user experience engine such as Microsoft Silverlight. On the other hand, the pessimists out there would claim that RIAs introduce client deployment and management overhead with significant associated costs. The other types of applications have their own management overhead regarding deployment, configuration, and monitoring. In subsequent posts, I’ll address some of those issues.

In the meantime, let me know what you think. Is your organization deploying a preponderance of web applications with a trend toward RIAs? Or do you have some other type of profile? If so, why? Inquiring minds want to know. All the best, Erik
Successful adoption of Service Oriented Architecture (SOA) has been a topic of discussion pretty much since its introduction to enterprise businesses five or six years ago. As companies have worked to adopt this approach to building more reliable and agile applications at a lower cost, they quickly found that it wasn’t as easy as building the monolithic applications of old, or even the n-tier and Internet applications that became standard archetypes in the late 1990s. While those types of applications certainly require solid application development, deployment, and management discipline, SOA introduces several new dimensions to the challenge of successfully adopting applications built on this architecture.

Applications that rely on SOA have two inherent attributes that do not necessarily exist in applications based on other architectures. First, the architecture promotes the composition of ever-smaller atomic services into higher-order services (a ProcessOrder service, for example, may have ValidateCustomer and CheckCredit services associated with it). This “feature” introduces the potential for hundreds or thousands of these compositions to exist in the enterprise (or, in some cases, outside the organization’s firewall; more on that in a subsequent post). Second, services based on SOA are autonomous and stateless by nature (or, at least, they should be!). As such, they can be created and exposed by anyone in the organization as a completely independent island of functionality, without being tied to any one specific application.

While both of these elements of SOA can certainly benefit enterprises that wish to accelerate the creation of custom, line-of-business applications through the promise of service reuse, the dark side is that services can proliferate out of control. Adopting SOA doesn’t inherently enforce reuse. Corporations may have multiple versions of a “CreditCheck” service created by different departments.
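The composition point is easier to see in code. Here is a minimal, hypothetical C# sketch of a higher-order ProcessOrder service composed from two atomic services (the interfaces and class names are illustrative, not from any real system):

```csharp
using System;

// Atomic, stateless services, each exposable independently of any one app.
interface IValidateCustomerService
{
    bool Validate(string customerId);
}

interface ICheckCreditService
{
    bool HasCredit(string customerId, decimal amount);
}

// Higher-order service composed from the atomic ones. Each dependency could
// be owned by a different department -- which is exactly how duplicate
// "CheckCredit" implementations proliferate without governance.
class ProcessOrderService
{
    private readonly IValidateCustomerService _validate;
    private readonly ICheckCreditService _credit;

    public ProcessOrderService(IValidateCustomerService validate,
                               ICheckCreditService credit)
    {
        _validate = validate;
        _credit = credit;
    }

    public bool ProcessOrder(string customerId, decimal amount)
    {
        // A failure in either atomic service surfaces as a failure of the
        // composite, which is why fault isolation gets harder as
        // compositions deepen.
        return _validate.Validate(customerId)
            && _credit.HasCredit(customerId, amount);
    }
}
```

Nothing in this structure prevents a second, slightly different ICheckCreditService from appearing elsewhere in the enterprise; only discovery and governance do.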
Further, these services don’t necessarily have well-defined service levels backed by enforceable service level agreements. While this may not be a significant problem initially, it can be a serious concern for organizations that want to adopt SOA as a strategic direction within IT. The nature of application development and maintenance changes, since development cycles, feature requirements, and service level requirements become decoupled. While there are many potential impediments to SOA adoption, which I will post more on later, the discipline of governance and the implementation of robust service management are central to the long-term sustainability of applications that rely on the services within an SOA. Here’s why:

- Many organizations view service orientation as overly complex, without enough supporting tools to properly manage service artifacts.

- There is inconsistent use of repositories and/or registries for defining service levels and availability requirements and for providing adequate discoverability (for a good description of repositories and registries, see Keith Pijanowski’s article on this).

- Policy enforcement at run time is in many cases non-existent, or spotty at best.

- Good service orientation mandates a consistent approach to data access. This implies that data management, including master data management (MDM), and consistent access to structured data (as stored in a DBMS), semi-structured data (as you might find in a search service such as SharePoint), and unstructured data (as exists on a file share) must be taken into consideration. Security and privacy must also have a strong focus.

- Metrics definition and reporting is elusive. While specific services on individual servers can be (and often are) monitored, having an all-up view of a related set of services that provides a business view of the health of the application is an altogether different beast to tackle.

- Production testing and troubleshooting get harder to implement.
As services proliferate, a failure at any point along the way will be harder to pinpoint and diagnose, particularly if the failed service is part of one or more composite services. Subsequent posts will delve into each of these as well as other adoption issues. For now, we’d like to get your feedback on the list above and the subject of service management. What resonates with your experience and what doesn’t? In particular, I’m curious about whether application performance issues represent an impediment to SOA. In my experience, this hasn’t bubbled up as a tier one issue, but I invite you to prove me wrong. All the best, Erik, Application Platform lead, War on Cost Team