I’ve got a question about how others do estimating and then use those estimates in their portfolio planning process. If you estimate the cost and a finish date on an IT project (or really any project for that matter) and over the course of working the project, conditions change where the estimates are no longer valid, how do you handle the change? Do you revise estimates along the way and then have the resulting situation where the final product ‘meets’ your new revised estimates? Or do you hold yourself accountable to your original estimates from the very beginning?
If a group is serious about trying to get the estimates right in the first place and then measuring accuracy over the course of months, quarters and years, how you answer this question is very important. It seems that you must hold yourself accountable to your first estimates because they are the ones that were used to make the go/no-go decision in the first place. Furthermore, you want to set up a closed-loop process where you try to shrink the gaps over time and minimize errors as you get better at estimating.
This breaks down when events beyond the control of the team affect the project. Perhaps an unexpected M&A effort affects everything, or changes in leadership dramatically change priorities across the portfolio. In situations like these, the original estimates can be meaningless.
Also, you are learning things as you go along. Conditions change, priorities change, people leave, etc. Somehow you need to account for these learnings along the way so you can always provide the best estimate possible as you progress through the project.
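One way to make the closed-loop idea concrete is to track every project's error against two baselines: the original estimate that justified the go/no-go decision, and the latest revised estimate. The sketch below is a minimal illustration; the project names, amounts, and field names are all hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Project:
    name: str
    original_estimate: float                 # the cost used for the go/no-go decision
    actual_cost: float
    revised_estimates: list = field(default_factory=list)  # re-estimates made along the way

def error_vs_original(p: Project) -> float:
    """Fractional error against the estimate that justified the project."""
    return (p.actual_cost - p.original_estimate) / p.original_estimate

def error_vs_latest(p: Project) -> float:
    """Fractional error against the most recent re-estimate (looks better, means less)."""
    latest = p.revised_estimates[-1] if p.revised_estimates else p.original_estimate
    return (p.actual_cost - latest) / latest

# Hypothetical portfolio data for illustration only.
portfolio = [
    Project("CRM upgrade", original_estimate=500_000, actual_cost=650_000,
            revised_estimates=[600_000, 640_000]),
    Project("Data center move", original_estimate=2_000_000, actual_cost=1_900_000),
]

for p in portfolio:
    print(f"{p.name}: {error_vs_original(p):+.0%} vs original, "
          f"{error_vs_latest(p):+.0%} vs latest")
```

Reporting both numbers keeps the team honest: the error against the latest revision can shrink toward zero even while the error against the original estimate stays large, which is exactly the "meets your new revised estimates" trap described above.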
I saw a recent report that talked about the need for IT shops to get better at estimating. Seems like I need to get a good answer on this question first.
I’d be curious how others address this.
OK, I’m going to shamelessly link to one of my own company’s blogs, because the post and embedded YouTube video about a data center disaster need to be spread further. I don’t know if this is an old or new incident (I didn’t look up dates), but it is quite telling and certainly not funny from a CIO perspective. However, the video might be of use to someone trying to make a point about the need for real disaster recovery planning and business continuity planning.
A year ago, I wrote on my internal company blog about why we do disaster recovery planning and why we make investments to protect our assets in our data centers. As could easily be expected, that very night we had a major power outage and one of our data centers was off-line for several hours. Great timing. I wrote a post about it the next day and fell on my sword.
We’ve experienced a situation where the line power from the local utility becomes unstable and cycles off and on in a random, rapid fashion, and the resulting power fluctuations damage one of our backup power systems, which then brings down the whole grid at the site. The backup system itself fails. Thank you.
We’ve also planned around the SARS outbreak, when we had to split some of our teams into two groups that did not cross paths in order to keep each team separate and hopefully healthy. In the case of SARS, we even saw that some local governments were considering closing facilities where outbreaks were happening in order to contain the spread. How would your data center work if it was closed and local staff could not get in for a period of time? Have you planned for that one?
My favorite is when a contract electrician doing work at the site elects to shut down power to the whole data center without talking to anyone in advance.
In short, as I posted earlier, you can’t really predict all the things that might go wrong.
I just spent the past few days at the Cisco CIO Summit and it was a great experience. Lots of discussion about web 2.0, security, and telepresence, all wrapped up around the need to have better collaboration inside organizations and between organizations. I think we are going to find that the next five years are going to be defined by better collaboration in the enterprise. We’ve got to keep reducing friction between individuals and teams, and we’ve got to leverage talent all over the world more effectively. It just has to happen, because organizations have got to get more productive with the resources they have in place.
We’ve been using some of these ideas and tools with some of our teams with good results. More to do however.
Read Andrew McAfee’s new book when it comes out and plug into all the writing and dialog taking place around the topic. There is also a great post on HBR called The Collaborative Imperative, and another really good post called A New Approach to Social Computing which I recommend.
In an earlier post, I wrote about Portfolio Management in IT being the hardest job for the CIO. I wanted to spend some more time on point 5 about the need to prioritize across the whole portfolio.
I really appreciated Peter’s comments and the links he provided, and I would encourage anyone to go back and look at those posts. Lots of good ideas, challenges, and issues highlighted in his comprehensive post. I wanted to focus on a few key points when I think of balancing the whole portfolio or prioritizing the work:
- It can’t really be done with an algorithm. An algorithm with weighting schemes and grading methods and ROI rules won’t solve the problem for you. It would be nice if Gartner Group or someone else could give us that algorithm, but it just doesn’t exist. At some point, all the attributes of your project/proposal (and all the other projects) need to be discussed and considered as a whole to determine what really needs to be done to move the company ahead. It comes down to conversation, engagement with the business and experience in identifying the work to do now.
- Experience is huge here: an experienced team that has gone through business cycles, has kept informed about best practices in the market, and is constantly learning can usually make the right decisions. This is part of the reason it is so important to retain your key talent, because you lose that experience when they leave.
- The method used to prioritize or re-prioritize the work in IT might change over time. As the leadership team evolves, as experience is gained, as business changes, an approach that worked one time might not work two years later. In my experience, I’ve seen us go through several different methods, each of which was probably about right for that time.
- Don’t look for the one perfect answer that worked somewhere else. Instead, as mentioned in my earlier post and in other places like Peter’s great articles, start with a method and go from there. Adapt, learn, get faster. Don’t be proud, use all the good ideas you can find and adapt along the way.
- Conversations are probably the key. IT leaders need to be part of the leadership team, engaged in their strategy sessions and part of the conversation throughout the Enterprise. If those conversations are taking place, then the prioritization becomes much easier.
Finally, I think that over-communicating what IT is thinking to the rest of the business should be the rule. If you are the CIO: write a blog, publish a newsletter, send out emails to the team, speak when invited, engage everywhere. Talk about strengths, weaknesses, opportunities and threats. Try to be very transparent with your thinking, plans and results. I’ll write further on communications later.
The October 2009 issue of Harvard Business Review has a great article on risk that needs to be shared and read inside IT. We tend to look to the past to predict the future, and as this article points out, that is a weak and ineffective position to take. We can’t really predict a 9/11 or a Katrina event very well. We think about standard deviations of impact, but a 9/11 event is far outside the expectation that might have been planned for in advance.
Instead, we have to focus on being resilient organizations with resilient processes and infrastructure. I read a book years ago, Managing the Unexpected, which was quite good. It talks about managing risk on the deck of an aircraft carrier or in the control room of a nuclear reactor. Few of us have to manage those levels of risk and complexity, but lessons from those arenas might help us think differently.
I tell people that there are two areas that are always going to be worry areas for a CIO: security and business continuity planning. We try to prepare the enterprise against all possible risks in those areas, and as this article discusses, we can’t find all those risks. We can’t predict all the ‘black swans’. Instead, we have to figure out how to be resilient. We have to 1) make prudent investments up front, 2) practice what we can, 3) learn from others, and then 4) focus on being nimble, fast and clear in our communications.
In the case of security and business continuity planning, we have to understand that ‘we don’t know what we don’t know.’ We can’t just look internally at our experiences and what we think. We must participate in the trade shows and conferences, listen to experiences of others and seek out input from a diverse set of sources.
I think the hardest problem in IT leadership is balancing the many needs against the limited resources and finding the best answers and path for the enterprise through this challenge. In the IT space, there is far more to do than can really be done, yet CIOs and those in leadership positions in IT are asked to do everything with those limited resources. The masters to be served include different business units, different geographies, legal requirements, governance requirements, business continuity requirements, and more. Finding the balance point is the central challenge of our jobs.
I think you have to do some of the following:
- Have a real business problem or opportunity that is being solved. Not a fake one or one that can’t be quantified. There is a book called How to Measure Anything that might help with this. I like the Six Sigma problem statement approach of saying we are going to improve process X by Y% by a certain date. It doesn’t work or fit for everything, but it will work for lots of things.
- Define a template to describe what is going to be done and why it needs to be done. Require every project to address a problem statement, costs, timelines, issues, data definitions, disaster recovery needs, data center requirements, what is going to be turned off/on and when, and several other factors. If you use a standard template, then all parties know the questions that need to be answered in advance and you avoid missing some important part of the decision.
- On the cost front, I like the book Payback: Reaping the Rewards of Innovation by Andrews and Sirkin and their concepts around time and costs on projects. Time to Market, Time to Scale, Costs to Scale and Annual Support Costs are the key elements.
- Work out a program review process with the senior leadership team. Perhaps a committee reviews all programs over a certain cost or all programs that meet some strategic criteria.
- The committee has to think about the whole portfolio because you can’t consider programs one at a time. Relative priorities, sequencing, dependencies, etc. need to be considered. This is hard stuff. It requires thoughtful engagement from the committee and the leadership team of the company.
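To illustrate the template and cost points above, a project proposal could be captured as a simple structured record that every program fills in the same way. The fields below are a hypothetical subset of a real intake template, and the cost elements follow the Payback framing mentioned earlier (Time to Market, Time to Scale, Cost to Scale, Annual Support Cost); the launch-cost field and all the numbers are my own illustrative additions.

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    """A hypothetical intake template; a real one would carry many more fields."""
    name: str
    problem_statement: str          # e.g. "improve process X by Y% by a certain date"
    time_to_market_months: int      # time until first release
    time_to_scale_months: int       # time until fully deployed
    cost_to_launch: float           # investment to reach first release (illustrative field)
    cost_to_scale: float            # additional investment to reach full deployment
    annual_support_cost: float      # ongoing run cost once live

    def lifetime_cost(self, years: int) -> float:
        """Rough total cost: build it, scale it, then run it for `years` years."""
        return self.cost_to_launch + self.cost_to_scale + years * self.annual_support_cost

crm = Proposal(
    name="CRM upgrade",
    problem_statement="Reduce order-entry errors by 20% by Q4",
    time_to_market_months=6,
    time_to_scale_months=12,
    cost_to_launch=400_000,
    cost_to_scale=200_000,
    annual_support_cost=50_000,
)
print(f"{crm.name}: 5-year cost ${crm.lifetime_cost(5):,.0f}")
```

A committee could then compare proposals on a common basis, but the point above still stands: a record like this feeds the conversation; it doesn’t replace it.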
This fifth step is probably the hardest part to work out and to keep consistently high quality over a long period of time. This is one of the key places where the CIO needs engagement with the leadership team of the company and needs to be plugged into the strategy making of the company. I’ll write more on this another time.
Also, there has to be a delegated process to review the programs below the committee approval threshold. I saw a case study years ago of a company and their ‘model’ portfolio management process. The key review point in their process was at $2.5M, and everything above that level required senior-level reviews in a certain committee. In fact, they had a great review process and I used elements of it to modify our own. However, it also seemed to ignore the problem of all the projects below $2.5M.
Have you noticed that at large sporting and concert venues, you frequently cannot access your mobile network due to the large number of people all trying to access it at the same time? At one venue where I attend college football games, mobile phone access on AT&T’s network is almost an impossibility because of the number of people all trying to call, text, look up scores, etc. The inability to access the network means one has to make other arrangements to connect with others at the event, like we used to do before anybody had a mobile phone: ‘let’s meet after the 3rd quarter…’
So at these locations where people spend a lot of money on tickets to attend, why do you suppose that the stadium or arena doesn’t provide free wifi access?
At one location where I attend games, they’ve built a new $260M football stadium and they do not provide free wifi. I sent someone a note about it during last year’s season and they said they’d look into it. Nothing happened, so this year I asked about it again and was told it was extremely expensive and they would not be providing wifi access. Extremely expensive? Hello? We have a new stadium, we didn’t plan to install wifi, and it is too expensive to put in now? I have it at my home. I can get access at McDonald’s and we have it at my church.
If you consider that many at the games are trying to look up other scores or sending simple messages or uploading pictures, doesn’t it seem to make a lot of sense to put in a wifi network that would 1) delight those who could access it and 2) unload a lot of traffic from the mobile networks making it more usable to all?
It seems to me that these organizations need to think differently and consider the ticket-paying patrons in the stands who want to connect with others easily. Putting in wifi so that parents can easily connect with their kids and businesses can connect with their clients is a customer service issue, not a technical issue. It certainly isn’t a cost issue.