What is a vanilla implementation?

It doesn’t matter what type of software you’re implementing or for what purpose: Einstein’s quote is the guide to success; finding the balance point of simple is the key to a vanilla implementation.

Software is created to perform a function, typically a function that many people do on a regular basis. The product designers spend lots of time understanding the commonalities of the function in order to create a product that follows Pareto’s Principle, the 80-20 rule. The 80% is the same for everyone; it’s standard/vanilla and is going to work just fine for your purposes. It’s how you handle the 20% that determines how vanilla your implementation will be.

From the software application perspective, here are three rules for keeping things vanilla.

Rule #1: Extend – never replace. If the application does it, use it; if a shipping capability exists within the software, use it. Don’t reinvent the wheel; extend from the hub to add the uniqueness you require. That’s kind of your own 80-20 rule: you use 80% of the functionality of the code that’s there and just add on your own 20%. The total cost of ownership for the 20% is a lot less expensive than 100% of your own development and support. Plus, you’ve already paid for the capability (and continue to pay support), so you certainly don’t want to pay twice! The question to ask in regard to Rule #1: Why doesn’t the standard capability work for my company? 99 times out of 100, there is just one specific situation where you need something a little bit different.

Rule #2: Extend – never change. Never modify standard application code or data structures. I have always told my teams: “Tell me the 25 other ways you’ve considered before saying you have to modify standard application code.” Almost every software manufacturer provides a method to extend the functional capabilities of their package. You may need to do some unique calculation or capture some additional data. Use the hooks provided to interrupt the flow of the process, do what you need to do, then come right back in where you left off in the application flow. This is what allows you to continue to upgrade as new versions of the software are released. In my career (many, many moons), there have only been two times when a standard code change was required. And believe me – everyone who ever worked with that section of the application knew about it and how to deal with it. Talk about documentation!
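
To make the hook idea concrete, here is a minimal sketch in Python of what “interrupt the flow, do what you need to do, then come right back” can look like. The hook registry, event names and order fields are all invented for illustration; no specific vendor exposes exactly this API. The point is that the vendor’s save_order flow is never edited; your 20% attaches at the exit points the package already provides, which is what keeps the upgrade path open.

```python
# A minimal sketch of "extend - never change", assuming the package
# exposes named extension points (user exits / hooks). All names here
# are invented for illustration.

HOOKS = {}

def register_hook(event, func):
    """Attach custom logic to a vendor-provided extension point."""
    HOOKS.setdefault(event, []).append(func)

def run_hooks(event, context):
    for func in HOOKS.get(event, []):
        func(context)  # do your 20%, then fall back into the standard flow

# --- vendor code: never modified ---
def save_order(order):
    run_hooks("before_order_save", order)  # exit point the vendor provides
    order["status"] = "SAVED"              # standard application logic
    run_hooks("after_order_save", order)
    return order

# --- your extension: lives entirely outside the vendor code base ---
def capture_extra_data(order):
    # the unique calculation the standard package doesn't do
    order["freight_estimate"] = round(order.get("weight_kg", 0) * 1.25, 2)

register_hook("before_order_save", capture_extra_data)

print(save_order({"id": 42, "weight_kg": 10}))
```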

Rule #3: Data drives flexibility and scalability. Every software application is configured to distinguish Company A from Company B. The data values that drive the function you are automating are what make your company yours and allow you to extend the functionality to meet the unique needs of your organization. For example, imagine you are a company that provides financing directly to some of your best customers, and only 1% of the time is this option selected. By creating a unique value, “Financed”, associated with the quote and order, you can create a hook when that value appears to capture additional data and determine differentiated pricing, for instance. You don’t change the way quotes and orders are entered or processed; you just add to the flow and perhaps do some overrides. If you decide later you’re not going to do that anymore, or want to do it a different way, you inactivate the “Financed” value or add a new value to drive behavior differently. You’ve handled the 1% and left the 99% as business as usual: vanilla.
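
A minimal sketch of the same idea in code, using the hypothetical “Financed” value (again, every name here is illustrative). Notice that turning the special behavior on or off is a data change, never a code change:

```python
# A minimal sketch of rule #3: the hypothetical "Financed" value drives
# the 1% of special behavior, and activating or inactivating it is a
# data change, never a code change.

def apply_financing_terms(order):
    order["payment_terms"] = "Net 90"       # differentiated terms
    order["price"] = order["price"] * 0.98  # illustrative pricing override

# Configuration data: which values are active and what behavior they drive
ORDER_TYPE_BEHAVIORS = {
    "Financed": {"active": True, "handler": apply_financing_terms},
}

def process_order(order):
    # the standard flow runs for every order (the 99%) ...
    behavior = ORDER_TYPE_BEHAVIORS.get(order.get("order_type"))
    if behavior and behavior["active"]:
        behavior["handler"](order)  # ... and the 1% is driven purely by data
    return order

print(process_order({"order_type": "Financed", "price": 1000.0}))
print(process_order({"order_type": "Standard", "price": 1000.0}))
```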

If you follow these three rules, you use what is there and never modify the code or data structures provided by the software manufacturer; you have a vanilla implementation. You have added your 20% of uniqueness in a way that maintains the integrity of the supplied software. The ease of doing this varies with the tools associated with the specific software application; some suppliers force the behavior, while with others you have to be more diligent.

Of course, there is a bit more to it than that. How you implement in a vanilla fashion is one thing; the bigger measure of the “vanilla-ness” of your implementation is what you choose to put into the 20%. As described in the rules, this is a question of uniqueness and value: unless there is something to differentiate you from your competitors, why spend the money? How do you know what is truly unique and differentiating about your company; where is the balance point of simple as described by Einstein? Software can do anything you can imagine, but you probably can’t afford for it to do everything. I always like to say, “Just because you can, doesn’t mean you should”.

Start by understanding the best practices for the function and in your industry (the 80%), then state specifically (in writing) what is really special about your company in these areas (the 20%). Be wary if anyone says, “We’ve always done this, we have to have it!” For every unique factor, describe how it differentiates you from your competitors and what benefits are derived. Don’t forget about reporting; you will look at your data differently from others, and you may need to feed your business intelligence applications for a more holistic view. Based on this, have a rating and ranking exercise performed by the program sponsors and executive leadership. This sets you up to find the balance point of simple for your implementation.

Vanilla now becomes a budget decision. Figure the costs for environments, configuration, testing and change management; that’s your baseline. Determine rough estimates for the ranked list (only go as far down as you think necessary). From there you can determine the cut line based on the budget for the program (or adjust your budget accordingly), as sketched below. Voila, vanilla!
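
Here is a minimal sketch of that cut-line arithmetic in Python; every figure and line item is invented purely for illustration:

```python
# A minimal sketch of the cut-line exercise. Every figure and line item
# is invented purely for illustration.

budget = 1_000_000
baseline = 600_000  # environments, configuration, testing, change management

# Output of the rating/ranking exercise: (item, rough estimate), best first
ranked_uniqueness = [
    ("Financed-order pricing hook", 120_000),
    ("Commission reporting feed", 150_000),
    ("Territory override screens", 200_000),
]

remaining = budget - baseline
funded = []
for item, estimate in ranked_uniqueness:
    if estimate > remaining:
        break  # the cut line: everything from here down stays vanilla
    funded.append(item)
    remaining -= estimate

print("Funded 20%:", funded)              # the first two items fit
print("Unspent / contingency:", remaining)
```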

There are three simple rules to follow for a vanilla implementation: never replace, never change and drive with data. The harder part is the choice of where to differentiate your implementation and your company. Vanilla is permission to play; the balance of simple is winning.

© Ellen Terwilliger 2012

Is the fox watching the hen house on your program?

If your company is undertaking a major cross-functional transformation program, then you have probably engaged a partner (or two or three) to help you. There are many reasons for needing a partner: not enough in-house resources, knowledge gaps, etc. One of two things can happen: either you run the partner or the partner runs you. How do you know which situation you’re in?

Often you do not possess the in-house expertise to lead and manage a change of this breadth and impact. That’s understandable; you’ve just never had the experience before. To remedy that, you can either hire internally or ask a partner to provide this service for you. Hiring internally for a program of a fixed duration can be a challenge, although in my experience there’s always another program on the horizon; it’s just a question of when it’s actually going to start! Frequently you ask your partner to provide the program management services and support for your leadership. Here’s where it can get tricky.

If the program management expertise is coming from the same partner and the same practice group, this is a red flag for me. Partners know how to run their methodology, and that’s what you’re going to get; they will be very adept at doing their work the way they know how to do it, but they may not be as effective in helping you do the work you need to get done, or in calling you on it if things aren’t getting done. This is what I call the fox watching the hen house. That may be okay if your organization is mature, doesn’t work in silos and has a high degree of trust among all cross-functional stakeholders. Otherwise, you may experience delays in your program due to inconsistent expectations, slow decision making and reactive risk responses.

If you have a fox ready to eat your hens, you may want an insurance policy, or what a CFO I worked with called program assurance or independent audit (which your own legal department should be doing too). The CFO discovered this is not as easy to find as it sounds, although most of the big consulting firms have a practice. What are you really looking for here? You want to ensure that the program is going to be successful and that the work is getting done – and, if not, to have an intervention. First and foremost, I believe you need someone who isn’t afraid to tell you and your partners what’s working and what’s not. In addition, they should ensure you are proactively mitigating risk (not just managing it). I also believe that identifying assumptions in order to manage expectations about your program is another service this third party can perform that your partners, who are busy delivering, may not think about (at least that has been my experience). This independent party is watching your back and looking out for the big “C” in your Company.

What choices do you have? You can use the assurance practice of your existing partner. In a large program, this should be happening anyway, preferably as part of your contract with minimal additional cost, since it’s supposedly good for the partner as well as good for you. In this case the practice, while independent, still has the same basis in the underlying methodology; you might only get a better-dressed fox. I look for a different point of view. The assurance practice of a different partner, or an independent third party with large implementation experience, is a good choice. This way there is no connection to the dollars-per-hour interest of the partner doing the heavy lifting and no preconceived notions of how things ought to work.

Agree on the measures that tell you this third party is keeping you on track, such as mitigation plans being in place and executed accordingly, decisions being made at the right time and level, and milestones being reached according to plan; even base a portion of their fees on those measures. As a colleague and I discussed, there are no insurance policies for large scale transformational change programs. Using a third party to tell it like it is allows you to keep on top of all the partners and your own resources and actions. They set you up to run your partners as opposed to having them run you.

As the saying goes, an ounce of prevention is worth a pound of cure. You can take action toward ensuring you are in the successful 40% of transformation programs.

Hens produce a lot of value over the long term. Make sure yours are protected from the foxes.

© Ellen Terwilliger 2012

Cloud Computing and the Demise of IT and the CIO

I have been reading a lot about the demise of the corporate IT department and the CIO as a result of cloud computing. Since everyone can “do it themselves”, why would you need IT? Admittedly I may be a bit biased having spent my career running applications organizations in IT (I appreciate infrastructure but I love apps!), but here are some thoughts on why you might want to keep IT and the CIO around for a bit longer.

IT is a service organization. It only exists to be useful to other departments in the company; IT really has no reason to do anything for itself. Project requests don’t come from IT; they come from other places in the organization. IT may create some projects, but they are done in order to keep something running for someone else (capacity, supportability, etc.). The CIO and the IT department have no reason to prefer one organization over another; from my point of view IT should always be looking at what is best for the company as a whole (the big “C” in your Company versus all the little “c”s). That is kind of a unique point of view within your organization if you think about it. Maybe only the CEO has that same broad perspective.

The CIO and IT have accountabilities across all lines of business and business functions to deliver not only shared technology, but integrated technologies as well. Notice I said shared (and what really isn’t shared?) and integrated, because those are key reasons for IT and the CIO to exist.

For technology that is commonly shared across the company, like email or file storage, you certainly don’t want every department out choosing its own cloud solution; it’s going to be difficult to even guess the total cost of ownership for the service in that scenario, so who knows whether you’re saving money or not. What about the policies that should exist to manage email and file retention? Legal may broker the agreement across the organization, but they are not likely to enforce the policies across all the applications that should be addressed; that is the IT organization’s job (another reason why people don’t like IT too much). Storage may be cheap, but it isn’t free, and housekeeping/compliance is good hygiene.

Then there is the functionality that allows your business to run: from marketing and sales, through order processing and fulfillment, into technical support and back around again. Who’s accountable for making sure these applications integrate if everyone is out to “do it themselves”? Sure, there are some “fit for function” capabilities that don’t impact other functions in any way, but I’ll bet they still need some data interfaced into them to work (names and IDs, for example). Of course, everyone could enter this data independently into each system, but I think you all know where that would lead – inefficiencies and inconsistencies, the reasons automation and IT exist in the first place.

The other thing I worry about, especially in using multiple SaaS solutions, is what I call the “choppiness” of the user experience. If a lot of independent applications, or multiple applications that serve the same purpose, are used to support one person’s workflow, what is their experience like? Are they having to wait for data to interface (oops, there’s that integration need again) or doing duplicate data entry? Do they have to deal with many different looks and feels to get their job done, or, even worse, the confusion of the same piece of data being called different things in different places? The impacts on enterprise reporting can be significant too: lots of different sources of data. I believe the role of IT and the CIO is to ensure the quality of the user experience and to help drive consistency across the organization where appropriate (remember that unique, broad point of view).

What about the contracts and SLAs to be managed? There are certain pitfalls to look out for in negotiating with and managing your XaaS providers. How will you ensure that the right things get considered, and who do you want to do this? Perhaps your purchasing department is a good candidate, but how do they know what to look for in technology service contracts? They could get a third party with the knowledge to watch their back (more suppliers with their fingers in the pie), or perhaps you just get burned and you learn. Who keeps score on the vendors, regularly reviewing their performance? Why not allow IT and the CIO to ensure this is done knowledgeably and consistently for your organization?

Finally, what about when something goes wrong? Someone still has to support the use of these cloud solutions. The provider does some of this based on those contractual SLAs mentioned earlier, but what about the configurations that are specific to your business? They know how their platform works, but they don’t necessarily know your business. I guarantee there will be times when the cloud provider tells you it’s not their problem and everything is working correctly, but you still have an unexpected outcome. So somewhere you still need people within your organization who can figure out specifically what happened in your scenario and make sure it doesn’t happen again. That’s what IT does (stop the bleeding and then do root cause analysis) while you continue to run the business.

Have you heard the expression “where the rubber meets the road”?  

Every person in every company interacts with technology on a daily basis; it is the road that your organization runs on. I know I prefer someone else to build the roads and do the maintenance; I just like to drive (fast). In the cloud (or not), I think the CIO and IT will continue to have a part to play in your business.

© Ellen Terwilliger 2012

Whatever can go wrong will go wrong – but you can be ready!

In the first week of my brand-new career in IT (many, many moons ago), I wiped out an entire disk drive while performing a backup (I followed the process exactly). During my first implementation (also many, many moons ago), the physical backplane burned during go-live and had to be replaced. And in my most recent large-scale implementation, the power failed in the command center in the middle of the production data conversion. I have been a victim of Murphy’s Law. On one program a colleague even gave me a framed lithograph of Murphy’s Law engraved on tablets, since it seemed we were dealing with some sort of disaster on a daily basis! Even today, I probably wouldn’t document any of these specific examples as a risk to my program (since the probability of recurrence in my lifetime is low), but I would certainly think about them!

Over the years, I have learned the value of proactively identifying the unique risks that could possibly befall my program (because Murphy never sleeps) so mitigation plans can be developed for those that could derail success. This is especially important for large scale transformational change programs where the opportunity for disaster to strike is more prevalent, just because there are more moving parts. By being prepared in advance, you can often prevent the issue from arising at all (An ounce of prevention is worth a pound of cure!) or at least have a plan in place and avoid panic.

The earlier you begin risk management and mitigation (notice I didn’t stop at just management; you will have to act too), the healthier your program will be and the more likely you are to be successful (remember, according to McKinsey, less than 40% of transformation programs succeed). In identifying risks, you need a few key pieces of data to start with: a risk event/statement, an impact statement, the impact level of the risk and the probability that the event will occur.

So how do you get started with proactive risk management? I begin with identifying assumptions (Assumptions – Power for Programs). Conflicting and invalid assumptions are primary candidates to create risk, although any assumption can lead to risk.

Here’s an example. In an 18-month program, my program manager assumed an eight-hour workday in her master project plan (not too unusual, although we all know people often work more than eight hours a day, especially on major programs). Our system integration partner assumed a nine-hour workday. Oops, conflicting assumptions! This created a couple of high-impact/high-probability risks (the arithmetic is worked out in the sketch below):

• Budget risk: $260,000 in unexpected cost (1 hour per day @ $25/hour for 40 developers for 12 months)

• Schedule risk: missed deadlines (10,400 hour discrepancy)
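
Here is the arithmetic behind those figures, as a quick back-of-the-envelope check in Python; the 260 working days in 12 months is my assumption, and it is what reconciles the hours to the dollars:

```python
# Back-of-the-envelope check of the figures above, assuming roughly
# 260 working days in the 12-month development window.
developers = 40
extra_hours_per_day = 1  # the nine-hour vs eight-hour workday gap
working_days = 260
rate_per_hour = 25       # dollars

discrepancy_hours = developers * extra_hours_per_day * working_days
unexpected_cost = discrepancy_hours * rate_per_hour

print(discrepancy_hours)  # 10400 hours
print(unexpected_cost)    # 260000 dollars
```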

Since this was identified within the first few weeks of the program, we were able to prevent the risk from occurring by choosing a valid planning assumption and adjusting accordingly: no budget overrun and no missed deadlines (at least not for that reason).

There are different types of risks and many ways to approach risk mitigation. In the example above, we chose to eliminate the risk, since it was straightforward to validate one assumption, invalidate the other and act accordingly. This isn’t always possible, so you can instead choose to minimize the impact of the risk (create a risk mitigation plan) or accept the risk (deal with the consequences should the event occur). A first step toward determining the mitigation approach is to score the risk impact and probability to identify the risks most likely to occur that will have a significant impact on your program. For those that rise to the top, you can take immediate action, eliminating or minimizing the impact. Through routine risk management you will be able to choose your approach for all the risks you identify, leading to a greater chance of success for your program.
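
A minimal sketch of that scoring step, assuming simple 1–5 scales for impact and probability (the scales, the threshold and the sample risks are all illustrative):

```python
# A minimal sketch of risk scoring, assuming simple 1-5 scales for
# impact and probability; the scales, the threshold and the sample
# risks are all illustrative.

risks = [
    {"event": "Conflicting workday-length assumptions", "impact": 5, "probability": 5},
    {"event": "Command center power failure", "impact": 4, "probability": 1},
    {"event": "Key configuration resource leaves", "impact": 3, "probability": 3},
]

for risk in risks:
    risk["score"] = risk["impact"] * risk["probability"]

# The highest scores rise to the top for immediate action
for risk in sorted(risks, key=lambda r: r["score"], reverse=True):
    plan = "eliminate or minimize now" if risk["score"] >= 15 else "mitigation plan or accept"
    print(f'{risk["score"]:>2}  {risk["event"]} -> {plan}')
```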

You can be ready for Murphy when he comes to visit your program!

© Ellen Terwilliger 2012

Assumptions – Power for Programs

The most powerful tool I use in leading transformational change programs is assumption management. Over the years, I have contracted with hundreds of project managers and most of the leading system integrators; none of them actively used assumptions. Maybe you are wondering what I am even talking about.

One place assumption management has been discussed is in regard to software development and defects, by The Software Engineering Institute (SEI) at Carnegie Mellon. The article states that most of the defects discovered in existing systems were probably caused by invalid or changed assumptions. I believe major programs fail, or experience huge cost overruns, for the same reason. Many programs don’t even consider what assumptions are out there. Project management methodologies refer to assumptions, and there are even a few companies that have based an offering on assumption and risk management, but very few programs actively use assumption management as a communication and alignment tool throughout the project lifecycle.

First of all, what are assumptions? Merriam-Webster says an assumption is a fact or statement (as a proposition, axiom, postulate, or notion) taken for granted. No matter how many people are involved at whatever stage of your program, everyone is making assumptions, taking for granted that something is in fact true. And I can guarantee you, assumptions are being made that are mutually exclusive; they conflict with each other. Why does that happen? Diversity is a key aspect of transformation teams; individuals are selected to provide broad perspectives and different points of view, so it is natural to see things differently and therefore to assume different things. Assumptions are not right or wrong; they are just valid or invalid within the context and timeframe of your specific program. The power in that is that people don’t need to fear assumptions.

What do you do about assumptions, especially those that conflict? The first thing is to identify and document them. If you don’t know what the assumptions are at each stage of your program, you certainly can’t do anything about them. Depending on the stage and size of the program, there are different ways to collect assumptions, interviews and questionnaires being the most common. I then use a matrix to categorize and de-duplicate the assumptions. The matrix provides the framework for the assumption validation meeting, where the cross-functional team rapidly reviews every assumption for validity and risk potential (more on that in another blog) with special focus on conflicting assumptions. If conflicts cannot be resolved rapidly, they are set aside for deeper review and validation. You’ll find that a lot of other assumptions come out during the validation meeting, allowing for an even deeper understanding of the program. This type of meeting is one of the most effective communication and alignment tools I’ve used; people are open, engaged and animated, and they come away with a real sense of ownership because they are involved. All this increases the probability of success for your program.
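
Here is a minimal sketch of the matrix step in Python. The fields, the categories and the deliberately naive conflict test (more than one distinct statement in a category) are all illustrative; the real judgment happens in the validation meeting:

```python
# A minimal sketch of the assumption matrix: categorize, de-duplicate
# and flag conflicts for the validation meeting. The fields, categories
# and the deliberately naive conflict test are all illustrative.

assumptions = [
    {"category": "Planning", "statement": "Workday is 8 hours", "source": "Program manager"},
    {"category": "Planning", "statement": "Workday is 9 hours", "source": "SI partner"},
    {"category": "Planning", "statement": "Workday is 8 hours", "source": "Finance"},
    {"category": "Scope", "statement": "EMEA is in phase 1", "source": "Sales ops"},
]

# Group by category and de-duplicate identical statements
matrix = {}
for a in assumptions:
    matrix.setdefault(a["category"], {}).setdefault(a["statement"], []).append(a["source"])

# A category holding more than one distinct statement is a conflict candidate
for category, statements in matrix.items():
    if len(statements) > 1:
        print(f"{category}: possible conflict, review in the validation meeting")
        for statement, sources in statements.items():
            print(f'  "{statement}" ({", ".join(sources)})')
```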

Remember that assumptions are vulnerable; they can change or become invalid over time. That is why periodic review and refresh are essential for longer duration programs. I recommend stage gates and/or testing events as opportune times to collect, review and revalidate assumptions, or incorporate assumptions as part of risk management. After all, an ounce of prevention is worth a pound of cure.

Can you see the power of using assumption management in your programs?

© Ellen Terwilliger 2012

Get to the cloud (and save money)!

How many C-level executives have heard or uttered this phrase in the last week – for that matter, in the last 24 hours? And how many of them really know what they are asking for?

The National Institute of Standards and Technology gives a good high-level definition of the cloud. Is cloud computing really that new? As with most things, life is circular (what goes around comes around), so maybe the technology specifics are new but not the concept. Anyway, the definition states that the cloud will save money (hence the utterings in the C-suite). Cloud vendors (and what hardware or software supplier isn’t a cloud provider today?) continually assure you that the cloud will save money; so it must be true, right? But will it really? Like all things, the answer is – it depends (don’t you hate that!).

There are great reasons for using Infrastructure as a Service (IaaS) or Platform as a Service (PaaS), particularly in R&D situations where you have varying compute needs, versus having dedicated hardware sized for your highest-capacity compute requirements. Crunching big data is probably another area where the cloud could save money. In both scenarios I believe you need data retention policies that are enforced (probably true in or out of the cloud) to ensure your variable storage requirements and associated costs don’t just escalate. You have costs in setup and in ensuring you continue to get what you pay for over time, but you are relieved of the day-to-day operational costs and pay for the benefits. I am sure you have other scenarios where you can save money using compute services in the cloud.

As indicated in an earlier blog, I love applications; I appreciate infrastructure (IaaS and PaaS), but it’s really just a vehicle to run applications (I am not biased or anything)! So I take a business-operations, application-centric view when looking at the cloud. Software as a Service (SaaS) is the predominant cloud choice (or whatever aaS you want to call the on-demand flavor) for running your business. I am not convinced that the cloud is always a money saver in this situation.

The Upper Edge LLC blog has an interesting series about SaaS and on-demand contracts and whether there is really much difference in your cost commitments: SaaS Matters: When “On Demand” Really Isn’t. For you finance people in the crowd (or is that cloud), how do you feel about capital expense (CapEx) versus operational expense (OpEx)? I am such a conservative that predictable CapEx depreciation makes me feel good; although, from the Upper Edge point of view, maybe you get predictable OpEx using cloud SaaS options. Variable expense is something you should consider as you think about getting to the cloud and saving money.

Another cloud cost consideration from a business operations perspective is integration. Certainly in the early stages of your company, getting your business up and running is a great, economical use of SaaS. And fit-for-function SaaS offerings that are best of breed at doing one thing really well are great choices too. But get too many independent SaaS offerings that need to work together and your cost savings may begin to erode or even disappear. There is not only the cost of integrating the applications; productivity must be considered as well. If a single user has to go to multiple applications to complete their work and the configurations and integrations do not provide the necessary consistency, productivity could very well take a dive. The cost of supporting your business users in your unique cloud environment will also be affected by the consistency of integrations.

The cloud and saving money may often, but not always, be synonymous; it depends. So be cautious if cost savings are your only reason to “Get to the Cloud!”

© Ellen Terwilliger 2012

Why. Now What?

According to habit #2 of Stephen Covey’s 7 Habits of Highly Effective People, you should start with the end in mind. While he’s talking about individuals, I think this applies equally well to transformation programs. (Maybe all seven habits are applicable; I’ll have to check that out.) Anyway, as they say in the army, you can’t hit a target you can’t see.

In an earlier blog, Why Transform, we used an example to show that a picture is worth a thousand words in manifesting your vision and allowing people to internalize why to transform.

This is a great start; people can see what success looks like, starting with the end in mind as Stephen Covey suggests. But this alone is not always enough to completely inform and empower people to do the right things and make the right decisions during your journey of change. I believe the next step is to describe solutions that show what is required to get to your desired outcomes.

In the example, there are spaces that need to be filled in to allow us to get to that happy productive salesrep. Take a look at the four solution spaces described below and we’ll talk about each of them:

To realize the vision of One Cloud One Me, first you need to know “Who I am”. A lot of companies do a pretty good job of this; often there is a unique identifier and strong password proliferated around the many application systems, allowing a single sign-on (or in some cases sign on and on and on, as my former colleague Leslie Thiel has been heard to say). So you might not need to do a lot of work on this space.

“What I do” can be a little trickier. Since you’re a very successful startup, it’s likely that these SaaS applications were purchased and implemented at different times as the need arose. It may also be the case that some applications were implemented by business groups and some by IT (does that sound familiar to you IT people in the crowd?); heck, maybe IT didn’t even exist at the company when some of these went live. In any case, what I do – my role – may not be consistent between the applications. This is pretty straightforward to figure out in order to ensure that your various types of sales people (salesreps, territory managers, etc.) have the appropriate capabilities to do their jobs.

Now comes some more interesting stuff (remember, “interesting” is the word used to mean that complexity and cost are starting to go up). “What I see” may be very different between the various clouds. For example, sales team may play a huge role in the opportunities I see in my sales force automation system (SFA: Salesforce.com in the example), but sales team doesn’t exist in my enterprise resource planning system (ERP: NetSuite in the example), so unless I am the actual salesrep assigned to the deal, I may not be able to see when an order gets booked for the deal I sweated over. And therefore I’m worried I’m not going to get compensated properly by the incentive compensation or commissions system (IC: Callidus in the example), since that’s in some other cloud formation in my mind. Now salesrep productivity begins to erode because time is being spent “checking things out” instead of selling. This is the real opportunity to be addressed.

So what do we do about it? Here’s where the last piece of describing the solution comes in: critical data leverage. By describing what data objects will be used to connect the spaces across all the clouds that are relevant to me, I don’t have to worry about consistency anymore. Everyone can see what will be used to guarantee that every application I use to do my job has the right stuff. People will be able to do the right things and make the right decisions to ensure the desired outcome: one me. The values of the data objects may change over time, as territories are divided up differently or salesreps move around the world, but the data objects will stay the same and be consistent (until the next transformation, anyway).

In the example, let’s say territory, sales team, customer classification and product family are used to determine what I can see and how I am compensated. So we add those data objects to the solution spaces to complete the picture.
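
A minimal sketch of what that completed picture can look like as data. The field names are invented for illustration; they are not the real Salesforce.com, NetSuite or Callidus schemas. Listing the gaps a data object cannot yet connect (like sales team missing in the ERP, the exact visibility problem described above) is the start of the reconciliation work:

```python
# A minimal sketch of critical data leverage: the four data objects from
# the example, mapped to the field each cloud knows them by. The field
# names are invented; they are not the real Salesforce.com, NetSuite or
# Callidus schemas.

CRITICAL_DATA_OBJECTS = {
    "territory": {"SFA": "Territory__c", "ERP": "territory_id", "IC": "TERRITORY"},
    "sales_team": {"SFA": "Sales_Team__c", "ERP": None, "IC": "TEAM_ID"},
    "customer_classification": {"SFA": "Cust_Class__c", "ERP": "cust_class", "IC": "CUST_CLASS"},
    "product_family": {"SFA": "Prod_Family__c", "ERP": "prod_family", "IC": "PROD_FAM"},
}

def reconciliation_gaps():
    """Spaces a critical data object cannot yet connect - the gaps to close."""
    return [(obj, app)
            for obj, mapping in CRITICAL_DATA_OBJECTS.items()
            for app, field in mapping.items()
            if field is None]

# sales_team is missing in the ERP - exactly the visibility gap described above
print(reconciliation_gaps())  # [('sales_team', 'ERP')]
```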

The last solution space, “How I know”, just became a bit easier to handle. By describing the solution spaces and the critical data to be leveraged, everyone has a clear picture of what will be used to connect the clouds consistently and provide the ability to validate or reconcile One Cloud One Me.

Starting with the end in mind and describing the solution creates the guardrails for your transformation program. Do you think people will be able to accurately create the processes and designs that will enable your desired outcomes? Will decision making be easier and more rapid? Will your transformation program be set up for success?

© Ellen Terwilliger 2012