Do Competency Frameworks Work in Real-World Organisations?

Introduction

“Do Competency Frameworks Work in Real-World Organisations?”

This question about competency frameworks, psychometric tests, 360° surveys and the like is regularly posed in various forms in LinkedIn analytics groups, and it inevitably generates a lot of debate. I’d like to discuss it here, focusing on the phrase ‘real world’, via a case study.

The term “real world” reflects the perception of many senior and operational managers that while competency frameworks and psychometric tests may be effective in the controlled academic environments in which they are created and developed, they may be less effective in their ‘real-world’ organisations. Could there be any truth to their concerns?

Analysts and businesspeople define “effective” differently

To answer this, we need to delve more deeply into what “effective” means in this context. Without getting overly academic, analytics practitioners usually consider a framework “effective” if it has sufficiently high construct validity, that is, if it accurately measures the construct it claims to measure, such as competency, ability or personality. Contrast this with operational managers, for whom “effective” usually refers to the predictive (or concurrent) validity of the measure: the extent to which organisational investments in employee frameworks predict desirable employee outcomes such as job performance, retention and, ultimately, profitability.

And herein lies the rub: the fact that a framework accurately measures the construct it purports to measure (e.g. competency or ability) does not mean that the instrument predicts important employee outcomes. Construct validity alone will therefore not necessarily deliver metrics that can serve as useful inputs to talent programmes designed to maximise the performance and retention of high potentials.

How to establish the validity of a framework

So how do you establish the validity of a competency framework, psychometric test or 360° survey?

  1. You first test concurrent validity on a representative sample of your employees to determine which competencies correlate with desired employee outcomes in the ‘real world’, i.e. in your organisation (as opposed to wherever the vendor claims to have tested it); see the sketch after this list.
  2. You then refine the framework based on what you learn from these correlations.
  3. If the potential cost of framework failure is particularly high, you deploy a methodology to test causality/predictive validity.
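As a rough illustration of step 1, here is a minimal sketch in Python. It assumes you can assemble a table with one row per employee, a column per competency scale and a numeric outcome measure; all file and column names here are hypothetical:

```python
import pandas as pd
from scipy import stats

def concurrent_validity(df, scales, outcome):
    """Correlate each competency scale with the chosen employee outcome."""
    rows = []
    for scale in scales:
        r, p = stats.pearsonr(df[scale], df[outcome])
        rows.append({"scale": scale, "r": round(r, 2), "p": round(p, 3)})
    return pd.DataFrame(rows).sort_values("r", ascending=False)

# Hypothetical sample: one row per employee in the representative sample.
employees = pd.read_csv("employee_sample.csv")
report = concurrent_validity(
    employees,
    scales=["collaboration", "commercial_acumen", "resilience"],
    outcome="job_performance",
)
print(report)  # scales that fail to correlate are candidates for step 2
```

The output simply ranks the scales by their correlation with the outcome; scales with small or negative correlations are the ones step 2 asks you to rethink.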

A case study

Sadly, few organisations do such testing before procuring frameworks, which can lead to unfortunate results. As an illustration, we recently examined psychometric (trait) and learnable-competency data for 250 management employees in the same role at a well-known global blue-chip brand, together with their job performance scores (Fig 1).

Fig 1. Mediation model: psychometric scores -> learnable competencies -> employee performance

We modelled them as:

  • Psychometric scores: Fixed characteristics unlikely to change over time and therefore potentially useful for selection if they correlated with performance
  • Competencies: Learnable behaviours as assessed by the company’s competency framework, 360° survey and development programme assessors. Useful for developing high performance.
  • Employee Performance: We were fortunate to obtain financial performance data for each manager, which means these scales were reasonably objective. Ultimately, the reason for investing in psychometrics and competencies for selection and development is to drive high performance on this measure.

We found the following relationships in the data:

  • Psychometric and competency scores: Only 47% of psychometric test scales correlated with the learnable competencies, and effect sizes were small (typically less than 0.20), meaning the tests would not be helpful for selecting job candidates likely to exhibit the competencies valued by the organisation. One psychometric test did have a large correlation with the competencies; unfortunately the correlation was negative, meaning that the higher a candidate’s test score, the lower their competency scores – hardly what the company intended when it bought this test.
  • Psychometric scores and Employee Performance: Only 7% of the psychometric test scales correlated with employee performance, and again effect sizes were typically less than 0.20 – meaning that the psychometric tests were not useful for selection. Again, one test had a negative relationship with performance: if high scores were used for candidate selection, the process would be likely to select low performers.
  • Competencies and Employee Performance: Only 12% of the learnable competency scales correlated with performance – and again, effect sizes were small, meaning that investing in these competencies to drive development programmes was not worthwhile. Once more, some of these competencies had negative correlations with performance, meaning that developing them might actually decrease employee performance.
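The three bullets above correspond to the three paths of the mediation model sketched in Fig 1. As a rough idea of how such paths can be checked, here is a minimal sketch using ordinary least squares regressions; it assumes hypothetical file and column names and, for brevity, a single psychometric scale and a single competency:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per manager, with one psychometric scale,
# one learnable competency and a financial performance measure.
df = pd.read_csv("manager_sample.csv")

# Path a: does the psychometric scale relate to the learnable competency?
path_a = smf.ols("competency ~ psychometric", data=df).fit()

# Path c: does the psychometric scale relate to performance on its own?
path_c = smf.ols("performance ~ psychometric", data=df).fit()

# Paths b and c': does the competency predict performance once the
# psychometric scale is controlled for, and vice versa?
path_bc = smf.ols("performance ~ psychometric + competency", data=df).fit()

for name, model in [("a", path_a), ("c", path_c), ("b, c'", path_bc)]:
    print(name, model.params.round(2).to_dict(), "R^2:", round(model.rsquared, 2))
```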

Conclusion

So, to go back to the original question: do competency frameworks and psychometrics work in real-world organisations? One could hardly blame managers in the above organisation for being somewhat sceptical. This scenario is not unusual; it is typical of the results we find when auditing the effectiveness of companies’ investments in competency frameworks and other employee performance measures.

But does it have to be this way? I believe that the answer to this is no, and that frameworks can not only work, but can significantly improve performance in the ‘real world’.

All that is needed is some upfront analysis of the kind outlined above: it tells you whether performance is being properly measured and which scales are and are not useful, so that poor scales can be removed and, if necessary, replaced with better predictors or correlates. Typically, we see employee performance improvements of 20% to 40% simply by following this process.

So the answer to the original question is ultimately a qualified “yes” – competency and employee frameworks can work provided that systematic rigour is used in selecting instruments for ‘real world’ applications.

How to get value from competency frameworks and psychometric tests

  1. Frameworks cannot predict high performance if you don’t know what good looks like. The first step in procuring frameworks is therefore to ensure that “high performance” is clearly defined and measurable in relation to your roles. And if you want to appeal to your customers – operational managers – use operational performance outcomes.
  2. Before investing in a framework that could have a negative effect on your workforce’s performance, get an independent analytics firm’s objective evaluation of each scale’s concurrent and/or predictive capabilities in respect of employee populations similar to your own. If the vendor you’re considering purchasing from cannot provide independent evidence, request a discount or get expert advice of your own before purchasing anything.
  3. If your workforce is larger than around 250 employees, you should certainly get expert advice to ensure that a proposed framework does indeed correlate with or predict performance in your own organisation before making a purchase; a minimal sketch of such a check follows this list.
  4. If you need professional help interpreting quantitative evidence provided by vendors, get it from an independent analyst – never from a statistician working for the vendor trying to sell you the framework.
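For point 3, here is a minimal sketch of such a check, assuming a pilot sample containing the vendor’s proposed scales plus a later performance measure (all names hypothetical). Cross-validation gives an out-of-sample estimate rather than a fit to the pilot data itself:

```python
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Hypothetical pilot sample: the vendor's scales plus subsequent performance.
pilot = pd.read_csv("pilot_sample.csv")
scales = ["scale_a", "scale_b", "scale_c"]

# Out-of-sample R^2 across five folds: does the framework predict performance
# for employees like yours, rather than merely fitting the sample it was built on?
scores = cross_val_score(
    LinearRegression(), pilot[scales], pilot["performance"], cv=5, scoring="r2"
)
print("Mean out-of-sample R^2:", round(scores.mean(), 2))
```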

 

People Analytics: Who’s fooling who?

Participants at last week’s TMA Conference asked me to repost this blog, so here it is as requested.

Introduction

I’m going to argue here that many organisations using people analytics to improve their workforce programmes are fooling themselves.

Let me explain: evidence-based people analytics relies on a model something like this:

HR programme -> Competencies -> Employee performance -> Org performance

That is, you invest in workforce programmes to increase employee competencies (“the how”), which in turn deliver increased employee and organisational performance (“the what”).

The role of people analytics is to calculate whether your people programmes do in fact raise employee competencies and performance. If the analytics shows that your programmes are not improving performance, it provides guidelines on how to fine-tune them so that they do.
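As a rough idea of what that calculation can look like, here is a minimal sketch, assuming hypothetical columns recording programme participation plus competency and performance scores measured before and after the programme:

```python
import pandas as pd
from scipy import stats

# Hypothetical data: one row per employee, a participation flag and
# before/after competency and performance scores.
df = pd.read_csv("programme_evaluation.csv")
df["competency_gain"] = df["competency_after"] - df["competency_before"]
df["performance_gain"] = df["performance_after"] - df["performance_before"]

treated = df[df["participated"] == 1]
control = df[df["participated"] == 0]

# Did participants gain more competency and performance than non-participants?
for gain in ["competency_gain", "performance_gain"]:
    diff = treated[gain].mean() - control[gain].mean()
    t, p = stats.ttest_ind(treated[gain], control[gain])
    print(f"{gain}: difference = {diff:.2f}, p = {p:.3f}")
```

If the differences are negligible, that is the cue to fine-tune the programme rather than keep funding it as-is.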

Faulty competency and performance management frameworks

Over the past 15 years, I’ve asked many conference and workshop audiences the following questions about their performance management and competency frameworks:

1. Performance management: To what extent do you believe in your organisation’s performance ratings, as measured, say, by your annual performance review? Do they objectively reflect your real behaviour, and are they a fair, unbiased basis for your next promotion and salary increase (as opposed to your manager promoting whoever they feel like promoting)?

2. Competency management: To what extent do you believe that your organisation’s competency framework accurately captures the competencies required for high employee performance in your organisation?

The vast majority of audiences tell me that they believe in neither their competency nor their performance management frameworks because, for example:

1. Competency frameworks: In most cases, the competency framework was brought in from outside rather than created by managers who understand the real competencies required for high performance in their particular organisational culture. As a result, managers don’t believe in the framework and treat it as a tick-box exercise. Furthermore, since the framework covers multiple job families, it is unlikely to predict performance across them all: is it really likely that salespeople and accountants require the same competencies for high performance? Research shows that these lists should contain very different competencies, yet most organisations use a “one size fits all” framework, possibly with minor adjustments.

2. Performance management: Performance ratings are ultimately the subjective view of an all-too-human line manager. What chance, then, do employees have of an unbiased performance rating? (Vodafone is a notable exception here, where evidence for performance ratings is verified by multiple people.) Furthermore, many organisations use forced performance distributions, meaning that only so many people can be high performers. Who can blame high performers for not believing in their performance management system, or in their chances of promotion, when the forced distribution says “sorry, but the top bucket is already full”?

GIGO: Garbage In, Garbage Out

So here’s the problem: if like most people you don’t believe in your organisation’s competency and performance management frameworks, then you certainly aren’t in a position to believe in the results of statistical analysis based on data generated by these frameworks. As the old acronym GIGO says, Garbage In, Garbage Out.

What is the solution? I’ll cover this in next week’s blog but as an interim taster:

1. Stop doing people analytics until you’ve fixed your frameworks. You’re wasting valuable resources on analysis based on data you don’t even believe in (and putting the data into an expensive database does not make it any more valid).

2. You need to design your own competency frameworks, one for each focal role. When I say “you”, I mean that this needs to be done by your line managers (if you want them to believe in it) and facilitated by you as HR, e.g. using repertory grids.

3. If you accept that managers are human and that any performance ratings will therefore always be subjective, find ways to minimise the impact of subjectivity by using multiple raters/rankers.
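A minimal sketch of that last point, assuming a hypothetical long-format table with one rating row per employee per rater:

```python
import pandas as pd

# Hypothetical long-format ratings: columns employee_id, rater_id, rating.
ratings = pd.read_csv("ratings.csv")

# Average across raters so no single manager's view dominates the final rating.
consensus = ratings.groupby("employee_id")["rating"].agg(["mean", "std", "count"])

# Employees whose raters disagree most deserve a closer look before their
# rating is used for promotion or pay decisions.
print(consensus.sort_values("std", ascending=False).head())
```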

More on this next time.

The greatest impediment to building a people analytics function

People often debate the greatest impediments to building a people analytics function. Typical answers include lack of technical skills, lack of data, lack of senior management support, an organisational culture that doesn’t view employee behaviours as quantifiable, lack of funding, and so on.

While all of these contain some truth in my experience, the list lacks the number one barrier, so well expressed by Chris as his key takeaway after this week’s CIPD analytics workshop: “The obvious yet easily overlooked concept of finding the business problem first then working back from there”.

So what do we mean by first finding a business problem and starting from there? The first thing most analysts learn is that analysing without first proposing a model is usually a waste of time: without a model, you may end up finding relationships that arise by chance and are in fact spurious. What is a model? Here’s an example:

Programme £ -> Competencies (how) -> Performance (what) -> Organisational objectives

This model says that:

1. You spend money on a people programme e.g. development, compensation, recruitment, employee relations, and so on

2. The only reason you spend this money is to increase the available pool of organisational competencies, otherwise known as human capital (human capital is about competencies; human resources is about headcounts)

3. The only reason you want to increase the organisational competencies is to improve employee performance

4. The only reason you want to increase employee performance is to increase the achievement of organisational objectives

In other words, the model says we only spend money on people programmes to increase achievement of organisational objectives. (Side implication: So any programme that doesn’t result in the competencies which contribute to organisational objectives is not helping the organisation to achieve its objectives).

Now this model, like any model, has limitations e.g.

1. Some people argue we spend money on people programmes for reasons other than achieving organisational objectives e.g. we spend it as part of a corporate social responsibility to our employees (but try explaining that one to investors in public companies)

2. This model includes factors like “employee engagement” and “manager behaviour” as “competencies” (but this is just semantics – you come up with a better name than “competencies” for that box: some people refer to it as the “how” we get things done)

3. It’s difficult to prove directional causality e.g. how do we know that performance doesn’t “cause” a “competency” like engagement?

4. How do you measure competencies? Presumably via your competency framework; if you don’t have one, then use a Repertory Grid to develop one

5. How do you measure performance? Chances are you have a performance management framework so you could use that as a starter. If you don’t trust it, then this model gives you a good reason to fix it.

6. How do you measure achievement of organisational objectives for employees without P&L/budget responsibility? Answer: you’ll get tons of value by focusing on competencies and performance management for the first year or three; you can always come back to this afterwards.

But despite limitations like these, the model has two major benefits:

1. It translates directly from model to Excel spreadsheet (one row per employee, one column for each block in the model; a sketch of this layout follows the list)

2. More importantly, it forces people to think about how every penny of HR money will result in the achievement of some organisational objective. As Doug Bailey, Unilever HRD, said at a Valuing Your Talent event last month, “Whenever someone requests funding for an HR programme, I ask them how this will help us sell another box of washing powder”.
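As a rough illustration of benefit 1, here is a minimal sketch of that layout in code rather than Excel, using hypothetical column names and toy values for each block of the model:

```python
import pandas as pd

# One row per employee, one column per block of the model.
workforce = pd.DataFrame({
    "employee_id":       [101, 102, 103],
    "programme_spend":   [1200.0, 0.0, 800.0],  # Programme £
    "competency_score":  [3.4, 2.1, 4.0],       # Competencies ("how")
    "performance_score": [78, 55, 91],          # Performance ("what")
    "objective_contrib": [1.2, 0.4, 1.9],       # Organisational objectives
})

# With the data in this shape, each arrow in the model is simply a
# correlation or regression between adjacent columns.
print(workforce.drop(columns="employee_id").corr().round(2))
```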

So how does all this link to Chris’s original comment about finding the business problem first? It means that unless an analytics initiative is helping to fix some unachieved organisational objective, it is worthless. Thus the rule is:

The only way to start any analytics programme is by first finding an important organisational objective that is not being achieved (preferably one that most of the board agree is a problem – that way you’re more likely to get a budget for analytics)

So the bottom line is that if you’re achieving all your organisational objectives, you don’t need analytics. For the rest of us, chances are there’ll be some organisational objective not being achieved.

Let me hasten to add that organisational objectives are people-related objectives that appear in your corporate/organisational business strategy like “reduce the number of people in the population with disease X” or “achieve revenues of £X”. But more importantly, organisational objectives are not HR objectives like “employee engagement” or “employee churn” or “absence rates”. In the above model, those would count only as Competency or Performance measures. (In other words, don’t confuse organisational objectives with HR objectives).

So after reading this, you might say, this is all so obvious – I mean how else could you start an analytics project? I’d say that 90% of the calls I get start with (and I’m sure other analytics consultants agree with me): “Our HR function has collected a lot of data over the past few years. Can you please help us do something with it?”. I say “Like what?” The reply is usually: “Help us to make HR look good”. I suggest that the best way to make HR look good is by enabling (helping) the business to achieve its organisational objectives. And the only way it will do that is by starting with the business problem and not with the data.

In fact starting with the data is like saying “We’ve got all these motor spares lying around; can you help us build a car with them?”. What are the chances you’ll have all the right spares to build a car? If your “organisational objective” or problem is to build a car, you’d surely first determine what parts you need. Then by all means do an audit of what parts you have lying around. Chances are you’ll find you only have 5% of the parts you need to achieve your objective; for the rest, you’ll have to go out and obtain the other 95% of parts you need. It’s exactly the same with data for analytics: for any given problem with achieving an organisational objective, you probably only have 5% of the data you need to solve it.

So there you have it: that’s why Chris felt that his biggest insight from the workshop was: “The obvious yet easily overlooked concept of finding the business problem first then working back from there”. And not gaining that insight is really the biggest impediment to creating an effective people analytics function.