Find Your Best Project Leaders

My last post noted that filling gaps, improving skill mastery, and driving behavior change are the improvements organizations need. But how can you design these objectives into your talent improvement program? If you already have a program in place, how do you know you have the right mix? And how do you measure its impact on the organization?

Who are the truly competent initiative leaders in your organization? And how do you know?

Any competency improvement plan starts with identifying what the “truly competent” project or program manager looks like for the particular organization. We intuitively know that more competence pays for itself. And there is strong evidence for that intuition: it’s in our Building Project Manager Competency white paper (request here). But lasting improvement will only come from a structured and sustained competency improvement program. That structure has to begin with an assessment of the existing competency. Furthermore, the program must include clear measures of business value, so that every improvement in competence can be linked to improvements in key business measures.

My experience with such programs is that PMO and talent management groups approach the process in a way that muddles cause and effect. For example, a training program is often paired with PMO set-up. Fair enough. However, if the training design is put into place without a baseline of the current competence of your initiative leaders, then that design may perpetuate key skill or behavior gaps among your staff. You may hit the target, but a scattershot strategy leans heavily on luck.

In addition, this approach will leave you guessing about which part of your training had business impact. You may see better business outcomes, but not have any better idea about which improved skills and behaviors drove them. Even worse, if your “hope-based” design and delivery is followed by little improvement, then your own initiative may well be doomed.

So how should you fix your program, or get it right from the start? We at PM College lay out a structured, five-step process for working through your competency improvement program.

  1. Define Roles and Competencies
  2. Assess Competencies
  3. Establish a Professional Development Program with Career Paths
  4. Execute Training Program
  5. Measure Competency and Project Delivery Outcomes Before and After Training

These steps were very useful for structuring my thinking, but they’re more of a checklist than a plan. For example, my PMOs almost always had something to work with in Steps 1 and 3. Even if I didn’t directly own roles and career paths, I had credibility and influence with my colleagues in human resources. However, the condition of the training program was more of a mixed bag. Sometimes I would have something in place, sometimes I was starting “greenfield.”

The current state of the training program informs how I look at these steps.

  • Training program in place: My approach is to jump straight to Step 5, and drive for a competency and outcome assessment based on what went before. I assume Steps 1–4 as completed – even if not explicitly – and position the assessment as something that validates the effectiveness of what came before. In other words, this strategy is a forcing function that stresses the whole competence program without starting anew.
  • No training program in place: I use the formal assessment to drive change. As PMO head I have been able to use its results to explicitly drive the training program‚Äôs design. More significantly, these results are proof points driving better role and career path designs, even if HR formally owns those choices.

PM College has a unique and holistic competency assessment methodology that assesses the knowledge, behaviors, and job performance across the project management roles in your organization. As always, if your organization would like to discuss our approach, and how it drives improved project and business outcomes, please contact me or use the contact form below. We’d love to hear from you.

FYI: For more reading on competency-based management, check out Optimizing Human Capital with A Strategic Project Office.

McKinsey: Simulation key to how effective organizations build staff capabilities

I’ve seen the impact of leadership development on organizations: it’s why I joined PM College. One of the challenges is to determine which methods work best to drive transformation, or accelerate improvements one has already reaped. Our firm has experience and research that pins this down, but it’s always nice to find a third party that confirms what we know and believe.

McKinsey to the rescue, with a new survey on “Building Capabilities for Performance.” The survey refreshes data from a 2010 study, and finds that:

… the responses to our latest survey on the topic suggest that organizations, to perform at their best, now focus on a different set of capabilities and different groups of employees to develop.

In other words, the best performers did personnel development differently.

What did they do? The first finding that struck me was the use — or disuse — of experiential learning; McKinsey cites model factories and simulations as examples. The most effective organizations used these methods more than four times as often as the others. But even then, experiential learning was used sparingly, by just under a quarter of the top performers.

As long-time Crossderry readers know, I’m a big fan of simulations. We had great experience with them at SAP. As McKinsey notes, they are about the only way “to teach adults in an experimental, risk-free environment that fosters exploration and innovation.” To that end, several popular PM College offerings — Managing by Project, its construction-specific flavor, and Leadership in High Performance Teams — use simulations to bring project and leadership challenges alive…without risking real initiatives.

I’ll have more on other success factors — custom content and blended delivery — in following posts.

Crossderry Podcast #1 — 11 November 2014

Here is the first Crossderry podcast. I plan to do this roughly once a week. The topics: the Apple Watch as a threat to the Swiss watch industry, and a quick-hitter tweet review covering team size, platform category errors, and salespeople who know nothing about their customers.

Enjoy!

The Allure of Doomsaying

I just finished this Grantland piece by Bryan Curtis on the imminent demise of baseball. If you’re a fan at all — or a fan of any long-standing pastime — you’ve probably read or heard complaints like this:

Somehow or other, they don’t play ball nowadays as they used to some eight or ten years ago. I don’t mean to say they don’t play it as well. … But I mean that they don’t play with the same kind of feelings or for the same objects they used to. … It appears to me that ball matches have come to be controlled by different parties and for different purposes …

The kicker is that this quote is from 1868, eight years before the founding of the National League. It turns out that there’s a long thread of end-times commentary stretching back to the beginning of the Major Leagues, and Curtis unspools it carefully and well.

These persistent predictions hint at one of the reasons that doomsayers will never want for work: all human institutions, no matter how long-lived, will wax and wane. Predicting an institution’s demise, as Curtis describes it:

…allows us to imagine we’re present at a turning point in history. We’re the lucky coroners who get to toe-tag the game of Babe Ruth, Ted Williams, and Kurt Bevacqua.

“We are not at a historic moment,” Thorn said. “The popularity of anything will be cyclical. There will be ups and downs. If you want to measure a current moment against a peak, you will perceive a decline. J.P. Morgan was asked, ‘What will the stock market do this year?’ His answer was: ‘Fluctuate.’”

One driver that Curtis doesn’t mention is the control that failure gives us. There’s a certain temperament — and I plead guilty — that is very comfortable with the dodge Richard Feynman mocks here:

All the time you’re saying to yourself, ‘I could do that, but I won’t,’–which is just another way of saying that you can’t.

Making a positive forecast about, in this case, baseball would put us in the uncomfortable position of predicting success for something we can’t control. It is hard to create and achieve success in this world, and nothing lasts forever. The sure bet is on the “can’t” in Henry Ford’s “Whether you think you can, or you think you can’t–you’re right.”

As everyone says, please read the whole thing.

The Apple 8.0.1 Debacle: Whom to blame?

Marc Andreessen drew my attention to a Bloomberg article that laid out purported “links” between the 8.0.1 debacle and the failed Maps launch. @pmarca was properly skeptical of the article.

And indeed, the piece starts in on the leader of the quality assurance effort, noting that:

The same person at Apple was in charge of catching problems before both products were released. Josh Williams, the mid-level manager overseeing quality assurance for Apple’s iOS mobile-software group, was also in charge of quality control for maps, according to people familiar with Apple’s management structure.

If you didn’t read any further, you’d think the problem was solved. Some guy wasn’t doing his job. Case closed.

But are quality problems ever so simple? After all, isn’t quality supposed to be built into a product? If this guy was the problem, then why was Apple leaning so heavily on him to lead its bug-finding QA group?

Well, reading on is rewarding, for it becomes clear that the quality problems at Apple run deeper than a bad QA leader. For example, turf wars and secrecy within Apple make it so:

Another challenge is that the engineers who test the newest software versions often don’t get their hands on the latest iPhones until the same time that they arrive with customers, resulting in updates that may not get tested as much on the latest handsets. Cook has clamped down on the use of unreleased iPhones and only senior managers are allowed access to the products without special permission, two people said.

Even worse, integration testing is not routinely done before an OS feature gets to QA:

Teams responsible for testing cellular and Wi-Fi connectivity will sometimes sign off on a product release, then Williams’ team will discover later that it’s not compatible with another feature, the person said.

So all you Apple fans, just remember the joke we used to make late in a project: “What’s another name for the release milestone? User Acceptance Testing begins!”

Why personal behaviors impact testing

My last post used testing to illustrate the consequences of questionable personal behavior on a business situation. Quality is susceptible to personal and professional gaps that interact to amplify each other’s effects.

Why is that so? Let’s start with the examples I used. Recall that business process owners simply copied the developers’ unit tests to serve as user acceptance tests. I characterized this approach as a failure of accountability: the process owners didn’t believe it was their “real” job, even though they knew they would have to certify the system was fit for use. Less charitably, one could have called it laziness. More charitably, one could have called it efficiency.

And indeed, an appeal to efficiency underlay the rationalizations of these owners: “Why should I create a new test when the developer — who knows the system better than I do — has already created one?” How would you answer this question? As a leader, do you know the snares such testing practices lay in your path? Off the top…

  1. Perpetuating confirmation bias: By the time someone presents a work product for formal, published testing, he or she has strong incentives to conduct testing that proves the work product is complete. After all, who doesn’t want his work to be accepted or her beliefs confirmed? This issue is well known in the research field, so one should expect that even the most diligent developer will tend to select tests that confirm that belief. An example is what on one project we called the “magic material number”: a material that was used by all supply chain testers to confirm their unit and integration tests. And the process always worked…until we tried another part number (see the sketch after this list).
  2. Misunderstanding replicability: “Leveraging” test scripts can be made to sound like one is replicating the developer’s result. I have had testers justify this shortcut by appealing to the concept of replicability. Replicability is an important part of the scientific process. However, it is a part that is often misunderstood or misapplied. In the case of copied test plans, the error is simple. One is indeed following the test process exactly — a good thing — but applying it to the same test subject (e.g., same part, same distribution center, etc.). This technique means that the test is only applied against what may be “a convenient subset” of the data.
  3. Impeding falsifiability: This sounds like a good thing, but isn’t. In fact, the truth of a theory — in this case, that the process as configured and coded conforms to requirements — is determined by its “ability to withstand rigorous falsification tests” (Jim Manzi, Uncontrolled, p. 18). Recall the problem with engaging users in certain functions? These users’ ability to falsify tests makes their disengagement a risk to projects. Strong process experts, especially those who are not members of the project team, are often highly motivated to “shake down” the system. Even weaker process players can find gaps when encouraged to “do their jobs” using the new technology, with whatever parts, customers, vendors, etc. they see fit.
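
To make the confirmation-bias and “convenient subset” points concrete, here is a minimal sketch in Python using pytest. Everything in it is hypothetical: the check_availability function, the inventory data, and the material numbers stand in for whatever transaction your testers exercise. The point is that parametrizing one test over several subjects, including cases expected to fail, turns a confirmatory script into an attempted falsification.

```python
import pytest

# Hypothetical stand-in for the system under test. In the story above,
# this would be a supply chain transaction; the names and inventory
# data here are illustrative assumptions, not a real system's API.
def check_availability(material_number: str, quantity: int) -> bool:
    """Toy availability check; a real system would query live inventory."""
    inventory = {"MAT-1000": 50, "MAT-2000": 0, "MAT-3000": 10}
    return inventory.get(material_number, 0) >= quantity

# Running only the "magic material number" (MAT-1000 here) confirms what
# we already believe. Parametrizing over several materials -- including
# cases expected to fail -- applies the same test process to different
# test subjects, which is what meaningful replication requires.
@pytest.mark.parametrize("material,quantity,expected", [
    ("MAT-1000", 10, True),   # the convenient, always-works case
    ("MAT-2000", 1, False),   # out-of-stock material
    ("MAT-3000", 25, False),  # insufficient stock
    ("MAT-9999", 1, False),   # material unknown to the system
])
def test_availability_across_materials(material, quantity, expected):
    assert check_availability(material, quantity) == expected
```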

I hope this example shows how a personal failing damages one’s professional perspective. No one in this example was ignorant of the scientific method; in fact, several had advanced hard science or engineering degrees. Nonetheless, disagreement about who owned verifying fitness for use led to rationalizations about fundamental breaches in testing.

How personal shortcomings undermine recovery (Mini Case Part 2)

Unfortunately, our quality control processes didn’t fare so well. We did get sufficient testing resources for the first rollout, but a couple of process owners only delivered under protest. For you see, they believed that testing of their processes — even user acceptance testing (UAT) — was not their job. To put it another way, they did not hold themselves accountable to ensure that the technical solution conformed to their processes’ requirements.

This personal shortcoming — an unwillingness to be accountable — triggered a chain of events that put the program right back in a hole:

  • Because it wasn’t their “real” job, some process owners did not create their own user acceptance tests. They simply copied the tests the developers used for unit or integration testing. Therefore, UAT did not provide an independent verification of the system’s fitness for use; it simply confirmed the results of the first test (see the sketch after this list).
  • This approach also allowed process gaps to persist. Missing functionality that would have been caught with test plans that ensured process coverage went unnoticed.
  • Resources for testing were provided only grudgingly and were often second-rate. The people assigned often did not know enough about the system and process to run the scripts, never mind verify the solution or notice process gaps.
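
Below is a minimal sketch of what an independent acceptance test looks like when it starts from the process requirement rather than from the developer’s unit test. The ApprovalWorkflow class and its methods are hypothetical stand-ins for the real system; the point is that the process owner states the expected behavior in business terms and picks the scenario and the data.

```python
# Hypothetical, self-contained stand-in for a document approval system.
# The class and method names are illustrative assumptions, not the
# actual system's API.
class ApprovalWorkflow:
    def __init__(self):
        self._queues: dict[str, list[str]] = {}

    def submit_document(self, doc_id: str, approver: str) -> None:
        self._queues.setdefault(approver, []).append(doc_id)

    def pending_approvals(self, approver: str) -> list[str]:
        return self._queues.get(approver, [])

# An acceptance test derived from the process requirement ("a submitted
# document must land in the approver's queue"), not copied from a unit
# test. The process owner chooses the scenario and the test data.
def test_submitted_document_reaches_approver_queue():
    wf = ApprovalWorkflow()
    wf.submit_document("DOC-42", approver="plant_controller")
    assert "DOC-42" in wf.pending_approvals("plant_controller")
```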

To say it was a challenging cutover and start-up would be an understatement. Yawning process gaps remained open because they had never been tested. For sure, we had a stack of deliverable acceptance documents, all formally signed off. What we didn’t have was a process that was enabled and fit for use. One example:

  • Documents remained stuck in limbo for weeks after go live, all because a key approval workflow scenario had not even been developed.
  • And because it hadn’t been developed, the developers hadn’t created and executed a test script.
  • And because the process owners were so focused on doing only their “real” job, they missed a gap that made us do business by hand for nearly two months.