Better Leadership + Business Skills = Better Projects

What drives project success? Research has consistently shown that it’s having an effective project manager. Results from PM College’s latest research, “Project Manager Skills Benchmark 2015,” confirm this, showing that organizations with highly skilled project managers get significantly better project results.

This result is hardly a surprise, but the magnitude of the outperformance is. Organizations whose project managers are highly skilled outperform those whose project managers have low skill levels by almost 50%. In addition, high-performing organizations appear to emphasize skills beyond project management tools and techniques. High performers’ project managers excelled at leadership skills, especially displaying integrity and honesty, building relationships, and building trust and respect.

[Graph: Leaders and project managers differ on skill improvement priorities]

There is a lot more insight in the report, but let me highlight one key finding. As one might expect, project managers in all organizations need to improve across all areas of the talent triangle: leadership, business, and project management skills. Their skills are good to excellent in only 15% of organizations and inadequate to fair in 30%.

However, senior leaders are far more likely than project managers to see benefits realization, project alignment with strategy, and poor communication as challenges. This perception gap extends to skill improvement priorities (see graph). Note that the biggest gaps are in leadership, business, and strategy skills: project managers rate these as much lower improvement priorities than their senior leaders do.

The study is available for download now. Stay tuned for an invite to our upcoming webinar to review and discuss these results. Hold the date and time: 18 June (2 PM Eastern).

Why Project Management Expertise Isn’t Enough: Lessons Learned from Security Breaches

How many times have I heard that “a good project manager can manage any project”? Too often for my taste. My biggest issue with the claim is that it begs the question: the statement assumes we all agree that any project manager with a mastery of the profession’s tools and techniques can succeed anywhere.

We’ve finally learned better, and PMI has acknowledged this in its new requirements for PMP continuing education. As PMI itself puts it:

As the global business environment and project management profession evolves, the [certification] program must adapt to provide development of new employer-desired skills…. The ideal skill set — the PMI Talent Triangle — is a combination of technical, leadership, and strategic and business management expertise. (PMI 2015 Continuing Certification Requirements (CCR) Program Updates)

Our pending research on project skill gaps (stay tuned for a webinar invite) shows that executives and senior managers understand this much better than project practitioners. They emphasize strategy, business, and leadership improvements, while practitioners don’t.

Perhaps an example from the current headlines will help. As most of you know, security breaches have wreaked havoc on a number of prominent firms: Target, Home Depot, and Sony are simply the best known. The sad thing is that the most famous failures could have been prevented.

One of my new favorite podcasts is from Andreessen Horowitz, the venture capital firm. My most recent listen was an interview with Orion Hindawi of Tanium. I recommend listening to the whole thing (it’s less than 30 minutes), as Orion provides some great color on the what, where, and why of security attacks and vulnerabilities. The summary hits his sobering message on the head:

The paradox of security is we pretty much know what we are supposed to do most of the time — but we don’t do it. If you examine all the recent high-profile attacks, somebody in the organization knew something was wrong before it happened. They just didn’t have the ability to escalate the problem, or the ability to raise a flag that people took seriously.

In other words, we don’t lack the technical understanding of security risks, or the tools and techniques to mitigate them. We lack the leadership and business savvy to confront the challenge of communicating the risks, then deploying and using our toolkit effectively. The last two sentences of the quote show how these skill gaps drive the root causes:

  • “Ability to escalate the problem” is a leadership challenge. This suggests that “somebody” wasn’t connected, articulate, or brave enough to get to decision makers.
  • “Ability to raise a flag that people took seriously” is a symptom of weak strategy and business skills. If the threat isn’t framed, articulated, and understood in terms serious leaders get, then such warnings are ignored…or even worse, viewed as counterproductive scaremongering.

Find Your Best Project Leaders

My last post noted that filling gaps, improving skill mastery, and driving behavior change are the improvements that organizations need. But how can you design these objectives into your talent improvement program? If you already have a program in place, how do you know you have the right mix? And how do you measure its impact on the organization?

Who are the truly competent initiative leaders in your organization? And how do you know?

Any competency improvement plan starts with identifying what the “truly competent” project or program manager looks like for the particular organization. We intuitively know that more competence pays for itself. And there is strong evidence for that intuition: it’s in our Building Project Manager Competency white paper (request here). But lasting improvement will only come from a structured and sustained competency improvement program. That structure has to begin with an assessment of the existing competency. Furthermore, the program must include clear measures of business value, so that every improvement in competence can be linked to improvements in key business measures.

My experience with such programs is that PMO and talent management groups approach the process in a way that muddles cause and effect. For example, a training program is often paired with PMO setup. Fair enough. However, if the training design is put into place without a baseline of your initiative leaders’ current competence, then that design may perpetuate key skill or behavior gaps among your staff. You may hit the target, but a scattershot strategy leans heavily on luck.

In addition, this approach will leave you guessing about which part of your training had business impact. You may see better business outcomes, but have no better idea about which improved skills and behaviors drove them. Even worse, if your “hope-based” design and delivery are followed by little improvement, then your own initiative may well be doomed.

So how should you fix your program, or get it right from the start? We at PM College lay out a structured, five-step process for working through your competency improvement program.

  1. Define Roles and Competencies
  2. Assess Competencies
  3. Establish a Professional Development Program with Career Paths
  4. Execute Training Program
  5. Measure Competency and Project Delivery Outcomes Before and After Training

These steps were very useful for structuring my thinking, but they’re more of a checklist than a plan. For example, my PMOs almost always had something to work with in Steps 1 and 3. Even if I didn’t directly own roles and career paths, I had credibility and influence with my colleagues in human resources. However, the condition of the training program was more of a mixed bag. Sometimes I had something in place; sometimes I was starting “greenfield.”

The current state of the training program informs how I look at these steps.

  • Training program in place: My approach is to jump straight to Step 5 and drive for a competency and outcome assessment based on what went before. I treat Steps 1-4 as completed, even if they weren’t explicit, and position the assessment as a validation of the effectiveness of what came before. In other words, this strategy is a forcing function that stresses the whole competency program without starting anew.
  • No training program in place: I use the formal assessment to drive change. As PMO head I have been able to use its results to explicitly drive the training program’s design. More significantly, these results are proof points driving better role and career path designs, even if HR formally owns those choices.

PM College has a unique and holistic competency assessment methodology that assesses the knowledge, behaviors, and job performance of the project management roles in your organization. As always, if your organization would like to discuss our approach, and how it drives improved project and business outcomes, please contact me or use the contact form below. We’d love to hear from you.

FYI: For more reading on competency-based management, check out Optimizing Human Capital with A Strategic Project Office.

McKinsey: Simulation key to how effective organizations build staff capabilities

I’ve seen the impact of leadership development on organizations: it’s why I joined PM College. One of the challenges is determining which methods work best to drive transformation, or to accelerate improvements already under way. Our firm has experience and research that pins this down, but it’s always nice to find a third party that confirms what we know and believe.

McKinsey to the rescue, with a new survey on “Building Capabilities for Performance.” The survey refreshes data from a 2010 study and finds that:

… the responses to our latest survey on the topic suggest that organizations, to perform at their best, now focus on a different set of capabilities and different groups of employees to develop.

In other words, the best performers did personnel development differently.

What did they do? The first finding that struck me was the use (or disuse) of experiential learning; McKinsey cites model factories and simulations as examples. The most effective organizations used these methods more than four times as often as others. But even then, experiential learning was used sparingly, by just under a quarter of the top performers.

As long-time Crossderry readers know, I’m a big fan of simulations. We had great experience with them at SAP. As McKinsey notes, they are about the only way “to teach adults in an experimental, risk-free environment that fosters exploration and innovation.” To that end, several popular PM College offerings — Managing by Project, its construction-specific flavor, and Leadership in High Performance Teams — use simulations to bring project and leadership challenges alive…without risking real initiatives.

I’ll have more on other success factors — custom content and blended delivery — in following posts.

Crossderry Podcast #1 — 11 November 2014

Here is the first Crossderry podcast. I plan to do this roughly once a week. The topics: the Apple Watch as a threat to the Swiss watch industry, and a quick-hitter tweet review covering team size, platform category errors, and salespeople who don’t know anything about their customers.

Enjoy!


The Allure of Doomsaying

I just finished this Grantland piece by Bryan Curtis on the imminent demise of baseball. If you’re a fan at all — or a fan of any long-standing pastime — you’ve probably read or heard complaints like this:

Somehow or other, they don’t play ball nowadays as they used to some eight or ten years ago. I don’t mean to say they don’t play it as well. … But I mean that they don’t play with the same kind of feelings or for the same objects they used to. … It appears to me that ball matches have come to be controlled by different parties and for different purposes …

The kicker is that this quote is from 1868, eight years before the founding of the National League. It turns out that there’s a long thread of end-times commentary stretching back to the beginning of the Major Leagues, and Curtis unspools it carefully and well.

These persistent predictions hint at one of the reasons that doomsayers will never want for work: all human institutions, no matter how long-lived, will wax and wane. Predicting an institution’s demise, as Curtis describes it:

…allows us to imagine we’re present at a turning point in history. We’re the lucky coroners who get to toe-tag the game of Babe Ruth, Ted Williams, and Kurt Bevacqua.

“We are not at a historic moment,” Thorn said. “The popularity of anything will be cyclical. There will be ups and downs. If you want to measure a current moment against a peak, you will perceive a decline. J.P. Morgan was asked, ‘What will the stock market do this year?’ His answer was: ‘Fluctuate.’”

One driver that Curtis doesn’t mention is the control that failure gives us. There’s a certain temperament — and I plead guilty — that is very comfortable with the dodge Richard Feynman mocks here:

All the time you’re saying to yourself, ‘I could do that, but I won’t’ – which is just another way of saying that you can’t.

Making a positive forecast about, in this case, baseball would put us in the uncomfortable position of predicting success for something we can’t control. It is hard to create and achieve success in this world, and nothing lasts forever. The sure bet is on the “can’t” in Henry Ford’s “Whether you think you can, or you think you can’t – you’re right.”

As everyone says, please read the whole thing.

The Apple 8.0.1 Debacle: Whom to blame?

Marc Andreessen drew my attention to a Bloomberg article that laid out what it purported were “links” between this debacle and the failed Maps launch. @pmarca was properly skeptical of the article.

And indeed, the piece starts in on the leader of the quality assurance effort, noting that:

The same person at Apple was in charge of catching problems before both products were released. Josh Williams, the mid-level manager overseeing quality assurance for Apple’s iOS mobile-software group, was also in charge of quality control for maps, according to people familiar with Apple’s management structure.

If you didn’t read any further, you’d think the problem was solved. Some guy wasn’t doing his job. Case closed.

But are quality problems ever so simple? After all, isn’t quality supposed to be built into a product? If this guy was the problem, then why was Apple leaning so heavily on him to lead its bug-finding QA group?

Well, reading on is rewarding, for it becomes clear that the quality problems at Apple run deeper than a bad QA leader. For example, turf wars and secrecy within Apple contribute to the problem:

Another challenge is that the engineers who test the newest software versions often don’t get their hands on the latest iPhones until the same time that they arrive with customers, resulting in updates that may not get tested as much on the latest handsets. Cook has clamped down on the use of unreleased iPhones and only senior managers are allowed access to the products without special permission, two people said.

Even worse, integration testing is not routinely done before an OS feature gets to QA:

Teams responsible for testing cellular and Wi-Fi connectivity will sometimes sign off on a product release, then Williams’ team will discover later that it’s not compatible with another feature, the person said.

So all you Apple fans, just remember the joke we used to make late in a project: “What’s another name for the release milestone? User Acceptance Testing begins!”
