Project Server 2010 Webcast – 8:00 AM Pacific Time, Wednesday, March 31st

Don’t miss tomorrow morning’s TechNet Webcast: Managing the Project Life Cycle with Demand Management! Here are the details:

Language(s): English
Product(s): Microsoft Office Project, Microsoft Project 2010
Audience(s): IT Decision Maker, IT Generalist
Duration: 60 Minutes
Start Date: Wednesday, March 31, 2010, 8:00 AM Pacific Time (US & Canada)

Event Overview

Demand Management, a new feature in Microsoft Project Server 2010, captures work proposals in one place and takes them through a multi-stage governance process using a SharePoint workflow model. In this presentation, we provide an overview of Demand Management and its importance in managing project life cycles, and we explain how to configure Demand Management and its required components.

Presenter: Rolly Perreaux, Senior EPM Consultant/Instructor, PMO Logistics Inc.

Rolly Perreaux is a senior enterprise project management (EPM) consultant and instructor for PMO Logistics Inc., a company that specializes in EPM consulting services and training. Rolly has more than 25 years of business experience, holds various designations from the Project Management Institute (PMP), Microsoft, Compaq, IBM, CheckPoint, and CompTIA, and was recently named a Microsoft Most Valuable Professional (MVP) for Microsoft Project. Rolly’s detailed dossier can be viewed at www.pmologistics.com/bio/rollyperreaux.htm, and he blogs frequently at http://rperreaux.spaces.live.com.

View other sessions from Microsoft Project: Align People, Work, and Priorities. If you have questions or feedback, contact us.

“Tier 1” Apps are Special…But At What Cost?

Greetings everyone, and happy almost-spring. Lately I’ve been focusing on understanding the costs (in all their forms) of delivering and managing Tier 1 applications. For the sake of discussion, let’s define a “Tier 1” app in terms of reliability or “quality of service”. In a previous post, I discussed four tiers of criticality for enterprise applications. While you may disagree with some of the names, there should be little argument that there are classes of applications that are truly critical to the success or failure of a business: the so-called “mission critical” applications, also referred to more generically as “Tier 1 applications”.

These applications hold a special level of importance in the corporate enterprise because their failure (measured in terms of reduced service quality or complete outages) would have a profound effect on the business, including any or all of the following:

- Widespread business stoppage with significant revenue impact
- Risk to human health or the environment
- Public, widespread damage to the organization’s reputation
- Compromised company-wide productivity

Examples of these types of applications are eCommerce (amazon.com, eBay, etc.), 911 response systems, stock and commodity trading systems, and airline reservation systems (some would also put CRM and corporate email into this group). (Note: while some have referred to Tier 1 apps with examples such as Exchange, SharePoint, SQL, Oracle, DB2, etc., I claim they are missing the point. With the possible exception of SharePoint, these other examples support the application and need to be treated as part of the overall solution, not as the solution itself.)

It’s obvious that these applications are important to the business for the reasons listed above, as well as others. They deliver significant value to the business when they run well and have a huge impact on the business when they don’t.

Tier 1 Apps Put Quality and GRC Ahead of Cost

What I find interesting, however, is that these apps hold a special place in the minds (and the wallets) of business and IT leaders. Regardless of an organization’s IT maturity, companies will “invest” whatever it takes to keep these applications up and running at the highest levels of quality their customers expect. Even organizations that do not have a culture of IT maturity improvement will always make financial and human resources available to ensure Tier 1 apps remain highly available. While the primary driver for most applications in the organization (60%-80% of them) is cost (delivery and ongoing maintenance), for Tier 1 apps, Quality of Service (QoS) and Governance, Risk Management, and Compliance (GRC) come first.

The implications of this can be profound, particularly for companies that do not have a practice of IT maturity improvement. Highly mature organizations imbue their service delivery practices with high quality, high compliance, and low risk across the entire portfolio of their service catalog, without incurring huge maintenance costs. Less mature organizations, on the other hand, tend to be reactive in nature and waste resources to ensure these Tier 1 applications remain healthy. The costs incurred can come from many sources, such as expensive consulting resources; inefficient, time-consuming processes; and an over-reliance on expensive technologies. In short, these organizations will throw whatever is necessary at a Tier 1 app to keep it up and running and to meet whatever explicit or implicit quality and compliance bars exist.

What to Do?
Learn the Lessons from Tier 1 App Delivery and Management

Regardless of how you define it, every business of any appreciable size has Tier 1 applications. Unfortunately, many IT organizations do not have a very high level of IT maturity, and yet these Tier 1 apps demand it. As a result of this gap, significant wasteful costs are incurred to keep them up and running. Where pockets of good, mature IT practice exist, they probably exist within the realm of Tier 1 service delivery. Unfortunately, REALLY bad process exists there too, all in service of maintaining a high quality of service. In my role as an enterprise consultant for many years, I saw countless “all hands on deck” events when a Tier 1 app went down. There was a mad scramble to restore service, all the while work on other important IT functions was put aside.

For that reason, IT organizations should look at the mature practices and policies they do have (many of which will be implied) for their Tier 1 apps and see how to apply them across their IT service portfolio, but not simply because it’s “good practice.” The organization also needs to take a hard look at recent emergency situations, as much to understand what it costs to restore service as to understand how to minimize such occurrences. By using the lessons learned from their Tier 1 app efforts (both the good and the bad), IT organizations will reduce their overall delivery and operating costs by becoming more efficient in the delivery of IT services through such activities as:

- Rationalizing the costs of high availability
- Reducing the reliance on expensive consulting and support resources
- Becoming smarter and more targeted about information security
- Designing apps with the right level of service (how many “9s” are needed?)
- Resolving incidents more quickly with appropriate service monitoring

Conclusion

The overall message is simple and taken from an old adage: an ounce of prevention is worth a pound of cure. IT organizations and the businesses they support will lower their overall delivery and operations costs when they look to the best practices learned from the delivery and maintenance of their Tier 1 apps and apply them generally across their organization.

All the Best,

Erik Svenson, Application Platform Lead, War on Cost Team

Application Criticality

I recently completed a study of over 500 US-based customers in which I wanted to understand:

- What types of applications are being delivered and managed generally
- How much it costs to deliver and manage those applications
- How web services are being used
- How well cloud computing solutions are being adopted

In this first post about what I learned from the investigation, I want to share my thoughts on the concept of “application criticality”. We’ve all heard the term “line of business application”. While there are many specific definitions of what an LOB app is, a generally accepted one is: an application that is vital to running a business. This term, while generally descriptive, is extremely vague; it doesn’t get down to the level of specificity needed to understand how one LOB app compares to another in importance, scope, or complexity. In order to better understand the population of applications being delivered and managed, I needed to get to a lower level of granularity.

Our team therefore defined the concept of Application Criticality to better distinguish line of business apps from each other in terms of their importance to the business as well as their relative scope of influence on the business. Below are the four levels of criticality and their definitions. The descriptions of these levels are defined in terms of the impact on the business if these applications become unavailable. While there may be other ways to define these classes of criticality, we find it useful to refer to them in terms that line of business owners typically care about.

Criticality Level | Failures of applications in this class can result in:
Mission Critical  | Widespread business stoppage with significant revenue impact; risk to human health or the environment; public, widespread damage to the organization’s reputation; or compromised company-wide productivity

IT Managers: What do you expect when developers aren't incented to build manageable applications?

Greetings everyone. I’m Erik Svenson, a biz value strategist in the System Center team at Microsoft, and I focus on application platform management costs. I posted a couple of weeks ago about the different types of apps out there (web, RIA, rich client, etc.). In all cases, they suffer from one common, cultural issue that plagues the consistent management of enterprise applications: developers aren’t paid to design and build applications with manageability in mind.

In my 25 years in the industry, I’ve consistently gotten the message that building manageability into an application is, at best, an afterthought. When I was a developer back in the mid ’80s and early ’90s, the only thought we gave to the management aspect of an app was to put in somewhat meaningful messages when an error occurred. There were no conversations with IT about how the app should perform, nor even a document produced about what the application did. Nope. We just tested it and pushed it out to IT with a request for the right amount of disk and processing capacity (“right” being defined by us developers, by the way!). Part of that was because management tools for the distributed computing platform barely existed back then; they have only become mainstream in data centers in the past fifteen years or so.

Now, there’s no excuse. With System Center Operations Manager (SCOM), WMI, and the .NET Framework, we have a rich platform for easily building management capability into applications through custom alerts that are fed into Ops Manager (or any WMI consumer), as well as custom management packs. This is all wrapped in a strategic bow we call the Dynamic Systems Initiative, also known as “Dynamic IT”. And yet, few developers do this at all, or it’s an afterthought.

Why? Well, I think its roots are primarily cultural, reinforced by a lack of incentives. Developers simply aren’t paid to build proactive management capabilities into their applications. Even though it may take just a few lines of C# to build an alert these days (see the sketch at the end of this post), in the crush of trying to get an app out the door, these tasks are considered nice-to-haves and generally don’t get done, much in the same way commenting code isn’t a requirement.

So what’s to be done? Now that we have the tools for developers to easily build manageability, how do we do it? First, business stakeholders have to see the link between their needs for agility and reliability of apps in the business and the capabilities offered by the management platform. This is the old “an ounce of prevention is worth a pound of cure” adage. Second, development teams need to be incented not only to deliver applications on time but also based on the quality of those applications. This quality metric needs to extend past the idea of fixing bugs; quality has to also be a function of how costly it is to recover from an app failure. Of course, this requires that these costs are tracked as part of a set of key performance indicators (KPIs) managed by IT. Finally, IT operations needs to be “in the room” with development teams early enough in the development lifecycle to provide requirements as well as to understand the nature of the application.

In the coming months, I’ll be studying what it costs to manage a “bad app” and a “good app” across the different types of applications out there. In the meantime, what do you think? Does this ring true for you and your organization? Let me know.

All the best,

Erik
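P.S. For the curious, here is roughly what those “few lines” can look like. This is a minimal sketch, not a full management pack story: it writes a custom error event to the Windows event log, where an Ops Manager rule (or any other event log consumer) can pick it up and raise an alert. The source name, event ID, and failure scenario are hypothetical.

```csharp
using System.Diagnostics;

class PaymentBatchJob
{
    // Hypothetical event source; in practice you'd register it at setup
    // time, since creating a source requires administrative rights.
    const string Source = "ContosoPaymentApp";

    static void ReportFailure(string detail)
    {
        if (!EventLog.SourceExists(Source))
        {
            EventLog.CreateEventSource(Source, "Application");
        }

        // A monitoring rule keyed to this source and event ID can turn the
        // entry into an alert, instead of waiting for a help desk call.
        EventLog.WriteEntry(Source,
            "Payment batch failed: " + detail,
            EventLogEntryType.Error,
            1001);
    }
}
```

The point isn’t this specific API; it’s that surfacing failures in a form operations can monitor costs the developer very little when it’s considered up front.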

Application Archetypes and Management

In general, there are six different types (archetypes) of line of business (LOB) applications prevalent in modern corporations today:

- Rich Client Applications – Applications of this type are usually developed as stand-alone applications with a graphical user interface that displays data using a range of controls. Rich client applications can be designed for disconnected and occasionally connected scenarios because the applications run on the client machine.
- Web Applications – Applications of this type typically support connected scenarios and can support different browsers running on a range of operating systems and platforms. Web applications have no client-side scripts or components; the web server serves only HTML to the client.
- Rich Internet Applications (RIA) – Applications of this type can be developed to support multiple platforms and multiple browsers, displaying rich media or graphical content and providing a higher-fidelity user experience than traditional web applications. Rich Internet applications run in a browser sandbox that restricts access to some devices on the client.
- Services Applications – The basic goal in this type of application is to achieve loose coupling between the client and the server. Services expose business functionality and allow clients to access them from local or remote machines. Service operations are called using messages, based on XML schemas, passed over a transport channel. These applications may be part of a service-oriented architecture (SOA) or just a collection of web services used for specific application solutions. (A minimal sketch of a service contract appears below.)
- Mobile Applications – Applications of this type can be developed as thin client or rich client applications. Rich client mobile applications can support disconnected or occasionally connected scenarios, while web or thin client applications support connected scenarios only. Device resources may prove to be a constraint when designing mobile applications.
- Cloud-based Services/Applications – Applications in this space describe services deployed into either a private or public cloud infrastructure.

Many organizations naturally have a combination of most or all of these (maybe not cloud services/applications yet!), some of which are packaged applications, while others are developed in-house. So, here’s the question: are the populations of the different types of applications random, or are there patterns to the types of applications determined by some set of drivers?

From what I’ve discovered, management concerns drive many of the decisions to deploy applications of specific types, namely web applications. Arguably, these types of applications enjoy low deployment effort and cost compared to other types of applications that require components to be installed on some client device. The services that comprise web applications can also, for the most part, be centrally monitored without having to track utilization, configuration, and failures on client computers. The downside of these applications is that they tend to have a lower fidelity of user experience than any other type of application. Now, if you’re an optimist, Rich Internet Applications (RIA) have the best of both worlds: a rich user experience and a relatively thin client footprint, in some cases relying on a user experience engine such as Microsoft Silverlight. On the other hand, the pessimists out there would claim that RIAs introduce client deployment and management overhead that presents significant associated costs.
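To make the services archetype concrete, here is a minimal sketch of the kind of contract-first service it describes, using Windows Communication Foundation (WCF). The service name and operation are hypothetical; the point is the loose coupling: the client binds only to the contract, and operations are invoked via XML messages over whatever transport the binding specifies.

```csharp
using System.ServiceModel;

// Hypothetical contract: clients bind to this interface, not to the
// implementation, so the server side can change freely behind it.
[ServiceContract]
public interface IOrderStatusService
{
    // Calls arrive as XML messages (described in the service's WSDL)
    // over the configured transport channel (HTTP, TCP, ...).
    [OperationContract]
    string GetOrderStatus(int orderId);
}

public class OrderStatusService : IOrderStatusService
{
    public string GetOrderStatus(int orderId)
    {
        // A real implementation would query a back-end system here.
        return orderId > 0 ? "Shipped" : "Unknown";
    }
}
```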
The other types of applications have their own management overhead regarding deployment, configuration, and monitoring. In subsequent posts, I’ll address some of those issues. In the meantime, let me know what you think. Is your organization deploying a preponderance of web applications with a trend toward RIAs? Or do you have some other type of profile? If so, why? Inquiring minds want to know.

All the best,

Erik

Deployment Practices: Why Resource Max Units Should Never Be 100%

Reason number one is that, in all but the most extraordinary situations, it models a situation that is simply not possible. But first, some background on what Max Units really is.

Max Units defines the percentage of a resource’s full calendar working “period” that they can be assigned to work on tasks before Project sees them as over-allocated. Example: a resource’s calendar says they come in at 8am, work until 5pm, and take a 1-hour lunch, Monday through Friday. That is an 8-hour work day and a 40-hour work week. So if Max Units is 100% and the resource is assigned to work 9 hours in one day, they will be seen as over-allocated. The same applies if they are assigned to two 1-hour tasks during the same hour. This goes down to the minute level too: if two 1-hour tasks overlap by 1 minute, then for that 1 minute the resource is over-allocated. This helps the PM create models of assignments and get an idea of how many hours each team member is being assigned to tasks and how that falls across time, other assignments, etc.

So now you might be seeing what is wrong with 100% Max Units. It says that if I work an 8-hour day, I am available to work 8 hours on tasks. On its face this sounds logical, but dive a little deeper and it becomes obvious that this is just not possible. Nobody ever arrived at work at 8am, took a 1-hour lunch, left promptly at 5pm, AND got 8 hours of work done on tasks. EVER.

OK wait, I take it back. It is possible that someone did this on your project: IF your project schedule has tasks for things like going to the bathroom, answering non-project-related emails, going to a company meeting, being tapped on the shoulder by your cube-neighbor and being asked for “just a quick 5 mins. of help” (that turned into 30 mins.), and so on. If your project contains a task for every possible distraction from YOUR project and you expect your resources to track all of that, then never mind: you can set your Max Units to 100% (just count on a lot of churn on your team). But for most of us, it is not possible to do 8 full hours of PROJECT WORK in an 8-hour day. Doing so would mean being present for more than 8 hours so that all the other things had time in your day alongside your real work.

The best way to make our models more accurate (because that’s what project schedules really are: models of what we want our project work to look like) is to lower Max Units to something more like 85%. That would be the highest I would ever go on any project I was managing. I have seen it set as low as 75% at some sites, but generally I see 80-85%. What this means is that if you have a 1-day duration task and you assign a resource with an 85% Max Units value, Project will calculate the Work for that task to be 6.8 hours. You are modeling that, on average, this resource spends 1.2 hours of their 8-hour day doing something OTHER THAN working on your project. A Max Units value of 75% means that 2 hours are spent doing other things. Of course, some people will get more than 6.8 hours of work done in a day and some will get less; it depends on the nature of their job, their relationship with other projects, other teams, etc. So the value you set will never be perfectly accurate, but it WILL certainly be MORE accurate than 100%, which is nearly always wrong. The point here is to make your model as accurate as you can.
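If you want to sanity-check the arithmetic behind those numbers, the sketch below runs it for a few Max Units values. It is just the simple work-per-day calculation described above, assuming the 8-hour day from the example calendar; it is not how Project itself performs scheduling, only the back-of-the-envelope version.

```csharp
using System;

class MaxUnitsMath
{
    static void Main()
    {
        const double hoursPerDay = 8.0; // 8am-5pm with a 1-hour lunch

        foreach (double maxUnits in new[] { 1.00, 0.85, 0.75 })
        {
            double taskWork = hoursPerDay * maxUnits;       // work Project schedules per day
            double everythingElse = hoursPerDay - taskWork; // time modeled for non-project work

            Console.WriteLine("{0:P0} Max Units -> {1:0.0} h of task work, {2:0.0} h for everything else",
                maxUnits, taskWork, everythingElse);
        }
    }
}
```

Running it reproduces the figures above: 85% yields 6.8 hours of task work and 1.2 hours for everything else, and 75% yields 6.0 and 2.0.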
____________________________________________________________

I’m hoping to start a small series of Deployment Practices posts here covering things I have found to be useful ideas, practices, or methods for deploying Project Server. Please email me if you have suggestions or questions.

Technorati Tags: Project Server, Resource Management, EPM