Project problems can’t be solved with an operational focus



Straight from the Project Management Institute’s web site (and the PMBoK) is this definition of a Project

It’s a temporary group activity designed to produce a unique product, service or result. A project is temporary in that it has a defined beginning and end in time, and therefore defined scope and resources.

And a project is unique in that it is not a routine operation, but a specific set of operations designed to accomplish a singular goal. So a project team often includes people who don’t usually work together – sometimes from different organizations and across multiple geographies. 

Unfortunately, projects are very often assessed using metrics that have nothing to do with identifying unique & temporary activities.  Rather, persistent, on-going measures such as average weekly costs, hours worked, or material dollars spent are used to determine whether a project is running as it should.

Unfortunately, these sorts of measurements are better attuned to understanding operations, because they presume costs that are linear over time.  Projects have peaks and valleys, spikes and low points, periods of tremendous activity and periods with very little at all.  Whether or not they should is a different question – there’s certainly plenty of room for levelling out the workload in projects and sparing the individual person these ups and downs.  Even so, there are times in the life of a project when you may have multiple people working simultaneously on different sub-projects, and times when only one or two activities need to be going on.

As such, run rates for a project are erratic, as they should be.  Attempting to smooth costs across the entire project is dangerous.  It leads to people lingering on the project with little to do, just to keep expenses constant.  Individuals, and the departments they report to in a matrix organization, may need to keep spending flat.  Projects, however, are characterized by their temporary nature, and the ramping up and down of expenses can be considered an indication of efficiency, not inefficiency.
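To make this concrete, here is a small, hypothetical sketch (all numbers invented for illustration) of how a flat average run rate can look perfectly healthy while week-by-week variances against the phased plan tell the real story:

```python
# Hypothetical weekly costs ($K) for a 10-week project phase.
planned = [5, 10, 40, 60, 60, 40, 10, 5, 5, 5]      # ramps up, then back down
actual  = [24, 24, 24, 24, 24, 24, 24, 24, 24, 24]  # "smoothed" flat spending

avg_planned = sum(planned) / len(planned)
avg_actual = sum(actual) / len(actual)
# Both averages come out to 24K/week, so an operational run-rate
# metric flags nothing at all...
print(f"Average weekly cost - planned: {avg_planned:.1f}K, actual: {avg_actual:.1f}K")

# ...but a week-by-week variance shows under-spending in the busy middle
# weeks (work not getting done) and over-spending in the quiet ones
# (people lingering with little to do).
for week, (p, a) in enumerate(zip(planned, actual), start=1):
    print(f"Week {week:2d}: planned {p:3d}K, actual {a:3d}K, variance {a - p:+4d}K")
```

The averages are identical by construction; only the phased comparison exposes the mismatch between effort and need.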

There is, of course, much merit to the argument that bouncing people on and off a team leads to a loss of learning, momentum and flow – so it is better to have folks on the team continue to add cost, even if there is little for them to do.  I agree – right up until they begin producing work just for the sake of producing it.  If there is nothing of value for them to contribute, but disassembling the team would create a long-term problem, then look for learning opportunities within the project.  Have people sit in on working sessions outside of their functional area.  You might find people are adaptable to lots of different tasks, and this type of cross-training is invaluable when you need a pinch-hitter for an unexpected crisis.

Nonetheless, even when analyzing the cost reports for these activities, be very aware of who is doing what, so that you can distinguish between time spent adding genuine value through the transformation of work products and time spent learning & watching.  Doing so will prevent assessing projects as the outcome of on-going costs and, instead, allow you to determine the specific costs that create specific results – which, in turn, allows for investigations into better methods for producing the same results.



Why your PMP prep doesn’t feel like reality (and why it shouldn’t)


I am in the midst of a PMP exam prep class these days, diving deeper into the PMI’s methodology for project management than I ever have before.  Despite more than a decade of working on nothing but project & program teams, I’ve never gone after PMI certification.

True to my affinity for Lean thinking, I don’t put much stock in these types of certifications.  The class is bearing out that the intent is simply to pass the test, not build better project managers.  Everything is about the test, the test, the test – and there is very little about the development of the principles taught and how they came about.  Just. Pass. The. Test.  The test is also intentionally deceptive – minor turns of phrase mean different things in “PMI Land,” as the instructors like to call it.  A big part of passing the exam is tuning your eye to catch these clever little interpretations and usages – a skill which is useful for only one project: passing the test.

It is easy to understand why so many fellow students get frustrated and jokingly state that the exam does not reflect reality.  Unfortunately, what seems to get lost is that it’s not supposed to.


As I study the guidebooks for this class that are introducing us all to the PMI concepts, I am harking back to my Lean training and the years I’ve spent contemplating Operational Excellence through my writings on this blog.  In my mind are the oft-repeated Lean-thinking mantras: “Theory guides practice” and “There can be no improvement without a standard.”  Thank you, Dr. Deming and Mr. Shingo (and, please, OpEx gurus out there – correct me if I am quoting them wrong.)

I feel lucky to have the benefit of my time spent trying to understand the Lean paradigm because it is offering so much insight into what the PMI framework is trying to do.  It is establishing a standard.  It is offering a methodology for managing projects against which all other management styles, and outcomes, can be measured.  In a way, it depicts the ideal – if all projects, everywhere, operated in the way the PMI describes, then all projects would deliver on time, within budget, and with inputs from all stakeholders at every level of the organization – including customers.

Is that reality?  No.  Of course not.  If the standard were reality, there’d be no need to set up a test for it.  A standard is not meant to depict reality.  What it does do, however, is give us an ideal scenario against which to judge and measure the current state.  How far from this standard are we?  Did we make an intelligent deviation, based on detailed analyses of how our environment differs from that depicted in the standard, or did we simply throw up our hands and say, “But this is the way we’ve always done it”? (Or words to that effect, such as “I’ve never seen that” or “That just won’t work here.”)

When theory doesn’t match reality, there are two options: change the theory to match reality, or change reality to match the theory.  Those who argue the PMI framework just isn’t reality will be the ones trying to change theory in order to better align with their expectations – nearly all of which demonstrate a daunting tolerance for inefficiency & waste.  On the other hand, if you accept that the “theory” is really just a depiction of the ideal – you instantaneously give yourself something to work towards.  It is the “true north” of the program & project management world – to have a perfectly managed, documented, planned, monitored, tracked and executed set of activities that are completely understood and performed by all stakeholders.

My advice for those who are pooh-poohing the PMI framework as nothing more than an academic exercise designed to pass a test (which, to some extent, it is), is to think of the methods provided within the framework a bit differently.  The tools and techniques they teach are not a set of instructions on how to effectively manage projects.  Think of them, instead, as a depiction of a perfect universe – and use that depiction to begin thinking about the gaps between your current reality and the PMI’s idealized scenarios.


Project Management & Measurement gamed



In a recent article on his blog, Dan Markovitz offered this statement:

“One problem with stretch goals, I believe, is that they focus on outcome metrics, and can therefore be gamed.”

That got me thinking about how we analyze and measure progress on business projects.  All too often, I have seen project leads engage in gaming metrics as if their ability to adjust the numbers was the real purpose of their jobs.  As I indicated in my comments on Dan’s article, “How do we make these numbers look better” is an operational question, not a spreadsheet exercise.

Yet, project management tends to be all about outcome metrics.  Tracking costs vs. plan, Earned Value, Cost and Schedule Performance Indices, consumed slack – all are about what happened.  Granted, there’s an assumption inherent to those practices that the future can be predicted by understanding the past; however, that approach also seems to accept errors as inevitable – especially if we need a bunch of charts and graphs and variance analyses to tell us that we had a problem some number of days, or weeks, ago.
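For readers less familiar with the indices mentioned above: they are computed entirely from what has already happened.  The standard PMBoK formulas are CPI = EV / AC and SPI = EV / PV.  A minimal sketch, with invented numbers purely for illustration:

```python
def evm_indices(ev, ac, pv):
    """Standard Earned Value Management indices (per the PMBoK).

    ev: Earned Value  - budgeted cost of the work actually performed
    ac: Actual Cost   - what that work really cost
    pv: Planned Value - budgeted cost of the work scheduled to date
    """
    cpi = ev / ac  # Cost Performance Index: below 1.0 means over budget
    spi = ev / pv  # Schedule Performance Index: below 1.0 means behind schedule
    return cpi, spi

# Illustrative numbers: $80K of work performed, costing $100K, vs. $90K planned.
cpi, spi = evm_indices(ev=80, ac=100, pv=90)
print(f"CPI = {cpi:.2f}, SPI = {spi:.2f}")  # both below 1.0
```

Note that every input is a backward-looking total – which is exactly the point above: the indices can tell you a problem occurred, but only after the fact.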

Somehow, that doesn’t seem good enough.

I’ve seen far too many managers who are completely distant from the day-to-day operations of the projects and processes that ought to be occurring right under their noses and who, unfortunately, have become dependent upon analyst-manipulated reports to convey information.  This reality only points out the importance of applying concepts such as leader standard work to project management and controls.  Simply put – leaders need to understand not just the outcomes and results of their team’s efforts, but to be inherently familiar with the mechanisms and processes by which that work gets done.

When those processes are not measured, analyzed and understood – all that’s left is a pile of reports measuring the outcomes.  When those metrics don’t show things as expected, a loose interpretation of the inputs goes on – in order to justify the manipulation of numbers before they get passed on to the next level of review.  “The data doesn’t reflect reality” or “we need to make an adjustment to the numbers” is heard all too often.  If the data is so easily, and subjectively, corrected, then its method of collection can also be easily corrected, so that good information is passed along – not manually tweaked information that results in nothing more than watermelon reporting (the phenomenon by which red projects get greener as the reporting moves higher up).

The best way out of this?  Understanding the way in which the metrics are compiled is one.  The better solution, however, is to be involved.  Know who is working on the team, and why, and how they work, and what they are working on.  Engage in the human elements and be aware of not only what is going on, but what should be going on.  This places a premium on good planning and strong servant leadership.

The worst project & program managers I’ve ever worked with very often never looked at the reports that were provided to them.  They simply didn’t understand the information and/or believed that force of personality was sufficient to effect positive outcomes.  Some of the best I’ve worked with didn’t read the reports, either – because there was nothing in all that data that they did not know already.

You are at the mercy of your analysts (and you don’t even know it)



I have a question for all those in leadership positions, who spend some amount of time on a weekly, monthly, quarterly or annual basis reviewing reports and presentations to get a handle on their business.  My question is this – For all the data you review – how was it put together?

I am of the opinion that the vast, overwhelming majority of those who review charts and graphs have no idea how the data was collected, downloaded, analyzed and ultimately converted into a presentation.  How much fudging, adjusting, hide this, don’t show that – goes on?

Here’s a fundamental truth: If you do not know how the number was constructed, you do not understand the number.

Anyone who has worked for any amount of time doing staff-level analytic work knows that, when reports are presented, people will glance over the numbers looking for anomalies but never bother to understand the computation behind what is in that report.  Any attempt to explain the methodology results in blank stares, glassy eyes and, in many cases, utter disdain for wasting time explaining the math.

Unfortunately, what matters more than the number is the methodology.  Information can be excluded and massaged.  It can be changed to put a positive spin on the situation.  As such, acting on information you don’t fully understand can lead to a disaster.

You can’t just focus on the “results” – meaning the data that you are presented in some pre-formatted set of documents.  Focus, instead, on the process that gets the results, the planning that creates the process, the knowledge and experience that develop the plan, the personalities that shape the knowledge and experience… this is where the foundation for the performance you hope to see begins.  The complex interplay of personalities is what drives team and group performance.  If you have the wrong mix, or a homogeneous no-mix-at-all situation, then you’re clearly never going to get a sustainable good result.

Instead, what you will get is a hodge-podge of suboptimal processes and broken business operations.  You’ll never know it, however, because the way the data surrounding those deficiencies is presented will be mashed into presentable form by an army of analysts, and that process will result in truly useful information being obscured, eliminated and misinterpreted.

Analysts – Go beyond the metrics


I’ve spent more than a decade working as an analyst, developing budgets and funding requests, proposal estimates, and creating and monitoring metrics to help identify trends in cost & schedule performance.  In some cases, the projects I worked on were funded for tens of millions, went on for several years, and cut across departmental boundaries, competing corporations, and government clearances.

In that time, I have worked with many astute analysts who can just about turn MS Project on its ear and all but make Excel and Access sit up and beg.  Unfortunately, I’ve also watched as those analysts continue to crunch through their data, producing the same metrics, for the same people, to show the same problems and performance trends…year after year…project after project.

While all of them have their theories on how best to detect performance issues and provide early warning of impending doom, there are rarely efforts to explain how those problems came about in the first place.  Fact of the matter is, there aren’t a whole lot of project, program, operations, sales, management or budget analysts out there who are willing, or able, to go beyond the numbers and look for the genuine, root causes.  In fact, doing so tends to get you in quite a bit of hot water.

Why do projects and programs fail repeatedly?  Why do businesses pursue “opportunities” far beyond their organizational capabilities?  Why do cost-cutting initiatives, reorganizations, and incentive programs never seem to yield the expected results?  There’s always someone who will point to an inaccuracy in the data, perhaps going an extra step to identify a flawed methodology that produced the numbers.  That usually results in another round of gathering new data and conducting more analyses – or dropping the initiative altogether since “we can never get this right.”

Unfortunately, constantly crunching numbers… and looking for more ways to invest in more applications… to produce more measurements… isn’t where the root of the problem lies.  If your analysis stops at what activities took place, without an understanding of how people, organizations (and people within organizations) work, you will never correct the underlying problem.  You have to look at the behavioral elements: How did personalities determine the approaches used?  How did the interplay of those personalities lead to the outcome?

If we invest more in understanding people and less in understanding numbers, we’ll start to see the root causes behind performance problems.  More than that, we’ll understand how certain patterns of decision making are inevitable when certain mixes of personality come into play.  If you aren’t looking for the intangibles… that produce the behaviors… that drive the decisions… that yield the measurable outcomes – you’re only looking at the tip of the iceberg when trying to solve performance problems at any level.