Increasing People Analytics business impact: A simple core capabilities strategy

Introduction

Like any business function, People Analytics is under constant scrutiny to support claims that it adds sufficient corporate value to justify its continued existence. 

One sure way to achieve this is to ensure that People Analytics priorities are aligned with corporate strategy. 

Many People Analytics functions, however, are forced to align their priorities with their parent HR function. This is unfortunate because in many (most?) cases, HR is itself not aligned with corporate strategy. This has the direct effect of reducing the impact of People Analytics on the business. 

This article offers a process, based on the well-known Prahalad & Hamel core capabilities model, for aligning People Analytics priorities with corporate strategy rather than simply with HR’s weltanschauung.

The article contains the following sections:

  1. What are core capabilities?
  2. Key principles for managing core capabilities
  3. How do core capabilities relate to People Analytics?
  4. Bringing it together: A strategy for prioritizing People Analytics investments

1. What are core capabilities?

Prahalad & Hamel in their now-famous 1990 paper argue that a business’s sustainable competitive advantage depends on its ability to develop core capabilities more rapidly than its competitors. For this reason, the core capabilities model is used by most – if not all – large organizations as a key component of corporate strategy development.

(Note: Some writers use the terms strategic and non-strategic capabilities interchangeably with core and non-core capabilities: such usage is compatible with this article).

So the first question to be asked then is what exactly are core capabilities?

Defining core capabilities

Before defining core capability, we need a working definition of the term capability. When it comes to prioritising People Analytics initiatives, we define an organization’s capabilities as its collective knowledge, resources, processes, technologies and structures.

Having defined capability, we now define core capability. Prahalad & Hamel define core capabilities as “the collective learning in the organization, especially how to coordinate diverse production skills and integrate multiple streams of technologies”.

This definition is difficult to operationalize, so to define core capabilities we will instead use Jay B. Barney’s VRIO framework, which states that in order to qualify as core, a capability must meet the following criteria:

1. Value: In order to be core, a capability must generate customer value 

For example, a company that relies on innovation for gaining customer market share would classify employees that do innovative work as core capabilities. In contrast, employees whose work is routine and not innovative would be classified as non-core capabilities. 

2. Rarity: In order to be core, a capability must be rare

In a company that mines rare minerals such as lanthanum or gallium, the rights for mining these minerals are core capabilities. Similarly, the knowledge and skills of experienced operating system developers who understand the Windows® operating system, integrations and architecture would be considered core capabilities at Microsoft because such knowledge and skills are in short supply.

3. Imitability: In order to be core, a capability must be difficult to imitate

If a capability can be rapidly or inexpensively duplicated, then it’s probably not core. Staying with our Microsoft example, Windows® developer knowledge and skills are difficult to imitate because it takes a long time to understand the entire Windows® architecture and all its integrations, thereby reinforcing their role as core capabilities at Microsoft.

4. Organization: A capability is core only if the organization has the ability to fully leverage it

Thus an organization must have the systems, processes and structures in place to extract maximum value from a capability in order for it to be counted as core. In our Microsoft example, Windows® development capabilities are only considered core if the company has the means to recruit, develop, engage and maximize the productivity of employees with these capabilities.
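To make the VRIO screen concrete, here is a minimal sketch in Python. The capability names and pass/fail judgements are purely hypothetical illustrations, not drawn from any real audit: a capability counts as core only if it passes all four tests.

```python
# Minimal VRIO screen: a capability is core only if it passes all four tests.
# Capability names and True/False judgements are hypothetical examples.

VRIO_TESTS = ("valuable", "rare", "hard_to_imitate", "organized_to_exploit")

def is_core(capability: dict) -> bool:
    """A capability is core only if it meets every VRIO criterion."""
    return all(capability[test] for test in VRIO_TESTS)

capabilities = [
    {"name": "OS architecture expertise", "valuable": True, "rare": True,
     "hard_to_imitate": True, "organized_to_exploit": True},
    {"name": "Back-office transaction processing", "valuable": True, "rare": False,
     "hard_to_imitate": False, "organized_to_exploit": True},
]

for cap in capabilities:
    label = "CORE" if is_core(cap) else "NON-CORE"
    print(f"{cap['name']}: {label}")
```

The point of the conjunction (`all`) is that failing any single VRIO test, such as rarity, is enough to disqualify a capability as core.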

Core capability examples

To make all of this more real, here are some potential core capabilities for some well-known companies:

A. Google: 

The ability to attract and monetize a large user base through free online products and digital advertising platforms (plus – perhaps a little unkindly according to WSJ – “minimizing government accountability with a smile”)

B. Facebook:

  1. Maximizing revenue per user by increasing connections and sharing between users
  2. Rapidly iterating and implementing new features
  3. Fostering employee autonomy
  4. Fending off competitors to its large user base

C. Amazon

  1. Customer-centric purchasing experience
  2. Distribution and logistics
  3. Leveraging a variety of technologies

D. Uber

  1. Network orchestration (as opposed to the capital intensive service provider model used by traditional transport companies)
  2. Dual business model delivering value to both of its markets (passengers and drivers)
  3. Scalable technology, easy-to-use, learnable interface (reduces the probability of switching providers)
  4. Algorithmic pricing based on demand-supply

Non-core capability examples

In contrast to core capabilities, non-core capabilities are knowledge, processes, technologies and structures which if imitated by your competitors would not excessively harm your market position. Thus Microsoft’s loss of many internal financial systems developers would harm Microsoft’s competitive position far less than losing a number of its Windows® developers.

In general, the further removed any capability is from the delivery of a company’s products and services, the more likely it is to be non-core. 

In many businesses, non-core capabilities might include, for example, back-office systems, facilities management and transaction processing systems.

2. Key principles for managing core and non-core capabilities

The first two key principles for managing core and non-core capabilities are particularly relevant to People Analytics and HR practices:

Key Principle 1

If a competitor gains access to your core capabilities, you’ve lost your competitive position no matter how well your intellectual property is protected. 

Key Principle 1: Guard core capabilities with your life, never lose control of them, never outsource them, never let them out of your sight.

Key Principle 2

The more time and money you invest in non-core capabilities, the less you will have to invest in your core capabilities. 

Key Principle 2: Minimize investment in non-core capabilities in order to maximize investment in core capabilities.

3. How do core capabilities relate to People Analytics?

To illustrate using our Microsoft example: Microsoft’s competitive position would probably suffer far more if its people process for recruiting its Windows® developers was disrupted than if its process for recruiting back-office developers was disrupted. This leads to another key principle:

Key Principle 3

  1. People processes, resources and technologies relating to core capabilities should also be considered core. 
  2. People processes, resources and technologies relating to non-core capabilities should also be considered non-core.

Shared processes

What about the common situation where both core and non-core capabilities share the same people processes, resources or technologies? For example, it is feasible that Microsoft may use the same process for recruiting both Windows® developers and back-office developers; or at the very least, the early stages of the recruitment process (for example, screening) may be the same for both roles until downstream where the process separates into role-specific tasks.

In such cases, sharing the same people process for both core and non-core capabilities violates Key Principles 1 & 2 because it means that:

  1. The non-core components of the processes cannot be outsourced without also outsourcing the core components (which violates Key Principle 1: Never outsource anything core)
  2. Minimizing investment in the non-core components of the process also means minimizing investment in the core components which violates Key Principle 2 (Minimize investment in anything non-core).

To address these violations, we introduce Key Principle 4:

Key Principle 4:

Any people processes, resources and technologies shared between core and non-core capabilities must be separated into core and non-core versions of the process.

4. Bringing it together: A strategy for prioritizing People Analytics investments

These four key principles can be combined to create a process for prioritizing People Analytics initiatives as follows (see Figure 1):

Figure 1: A strategy for prioritizing People Analytics investments

1. Identify your company’s core and non-core capabilities

If you don’t already know your organization’s core and non-core capabilities, obtain them from your company’s corporate strategy documentation. 

If they do not appear there, elicit them by interviewing appropriate executives. This is also a smart career move because not only will it raise your visibility in the executive suite but it may also help to improve your company’s competitive position by ensuring that these capabilities are included in the next round of corporate strategy thanks to your highlighting their absence.

2. Classify your people processes, resources and technologies as Core or Non-Core

Using your list of your corporate core and non-core capabilities, determine which of your people resources, processes and technologies are required to support the core capabilities. Call these your core people processes. Any people processes not required to support your company’s core capabilities are your non-core people processes.

Separate shared core & non-core people processes

Where people processes, resources or technologies are shared between core and non-core capabilities, make plans to separate these into core and non-core versions (to conform with Key Principle 4).
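Key Principles 3 & 4 turn this classification step into a simple set operation: a people process is core if it supports any core capability, and must be split if it supports both kinds. A minimal sketch, using hypothetical process and capability names:

```python
# Classify people processes as core, non-core, or shared (to be split),
# based on the capabilities they support. All names below are hypothetical
# illustrations, not drawn from any real company.

core_capabilities = {"os_development", "platform_integration"}

# Which capabilities each people process supports
process_supports = {
    "os_developer_recruiting": {"os_development"},
    "back_office_recruiting": {"finance_systems"},
    "shared_screening": {"os_development", "finance_systems"},  # shared process
}

core_processes, non_core_processes, shared_processes = [], [], []
for process, supported in process_supports.items():
    touches_core = bool(supported & core_capabilities)
    touches_non_core = bool(supported - core_capabilities)
    if touches_core and touches_non_core:
        shared_processes.append(process)   # split per Key Principle 4
    elif touches_core:
        core_processes.append(process)
    else:
        non_core_processes.append(process)

print("core:", core_processes)
print("non-core:", non_core_processes)
print("to split:", shared_processes)
```

The shared bucket is the one that matters most in practice: those are the processes that block outsourcing (Key Principle 1) and dilute investment (Key Principle 2) until they are separated.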

3. Lift-and-shift your non-core people processes

An efficient way to follow Key Principle 2 (minimise investment in non-core capabilities) is to outsource your non-core people processes, or at the very least devolve them to Shared Services or Global Business Services functions.

Should non-core people processes be reengineered before outsourcing?

A common question is whether non-core people processes should be made efficient before outsourcing them. The answer is usually no, because the goal at this stage is not to improve non-core process efficiency: it is to minimise investment in non-core processes in order to maximise investment in core processes.

To test this principle, ask yourself: Which will add more value: Improving our core processes or improving our non-core processes? 

Of course if your non-core people processes are particularly inefficient, then by all means come back to them after you’ve improved your core people processes as described in the next step.

4. Improve your core people processes

The resources and investment released by lifting-and-shifting your non-core people processes can now be deployed in the service of improving your core people processes.

Bear in mind that every improvement to your core people processes will increase your company’s competitive advantage because they support your core capabilities. 

While it might be tempting to ‘rip-and-replace’ your core people processes (i.e. rebuild them from scratch) rather than improve them incrementally, a rip-and-replace approach is risky because if something goes wrong with the people process as a result, you have damaged a core capability. A more conservative incremental improvement approach for core capabilities is therefore recommended.

Bear in mind that the objectives for improving your core people processes are to:

  1. Reduce waste
  2. Enhance quality
  3. Improve employee experience

Typical process improvement methodologies to consider, therefore, include:

A. Automation

E.g. RPA (such as UiPath, Blue Prism or Kryon), AI

B. Business Process Improvement

E.g. Lean, Agile, Six Sigma, TQM, Kaizen (5S), SIPOC and BPM

Summary

This article has argued that People Analytics priorities often differ from corporate priorities, significantly reducing the impact of People Analytics as a function.

To address this, the article has provided a process for aligning People Analytics with corporate strategy by prioritising people processes, resources and technologies based on a core capabilities model. 

This should result in a significant increase in the impact of People Analytics on the business.

Why #PeopleAnalytics should NOT be using regression to predict team outcomes

Did you know that regression (or any GLM) assumes that the Y observations (the outcome or dependent variable) are independent? If they’re not, your standard errors will be underestimated (making effects look more significant than they are) and your predictions inaccurate.

For example, say you’re trying to find out what drives performance, engagement or retention in your teams; in other words, your dependent variable is performance, engagement or retention.

Now ask yourself a question: are the engagement scores of employees within a team independent of each other? Of course not: people in a team influence each other. For example, if Lao is unhappy about some aspect of work and disengaged, there’s a reasonable chance that other members of that team will feel the same and therefore their scores are not independent as required by GLM regression.

Or consider another example: if a team leader’s behaviour affects one team member’s performance, chances are this behaviour is influencing performance across the whole team. Therefore the team members’ ratings are not independent and you are violating a key regression assumption which will result in inaccurate predictions and poor recommendations to the company.

I discovered this vital fact when researching the drivers of couple romantic relationship satisfaction (a team with n=2) and quickly learned that if one partner is dissatisfied with the relationship, there’s a good chance the other will be dissatisfied as well. Thus the scores are not independent and regression will deliver inaccurate predictions. How did I fix it?

The solution is to use a technique called Multilevel Modelling (MLM) which doesn’t assume that outcome variables are independent; it’s certainly the best way to get accurate results and make good recommendations if you’re working with team data. In fact, I won’t do any #PeopleAnalytics team data project without it.
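As a sketch of what this looks like in practice, the following uses statsmodels’ MixedLM on simulated team data (every number here is synthetic, invented purely for illustration): a random intercept per team absorbs the within-team dependence that ordinary regression ignores.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate engagement scores for 30 teams of 8. Scores share a team-level
# component, so observations within a team are NOT independent.
rng = np.random.default_rng(42)
n_teams, team_size = 30, 8
team_effect = rng.normal(0, 1.0, n_teams)             # shared within each team
df = pd.DataFrame({
    "team": np.repeat(np.arange(n_teams), team_size),
    "tenure": rng.normal(5, 2, n_teams * team_size),  # an individual predictor
})
df["engagement"] = (3.0 + 0.2 * df["tenure"]
                    + team_effect[df["team"].to_numpy()]  # team-level dependence
                    + rng.normal(0, 0.5, len(df)))        # individual noise

# Multilevel model: a random intercept per team handles the dependence
mlm = smf.mixedlm("engagement ~ tenure", df, groups=df["team"]).fit()
print(mlm.summary())

# A substantial estimated group variance confirms team-level clustering
print("team-level variance:", float(mlm.cov_re.iloc[0, 0]))
```

The fixed-effect estimate for tenure recovers the simulated value, while the group variance term captures exactly the between-team variation that would otherwise violate the independence assumption.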

MLM has been in the news a lot in the past few years because some political analysts used it (in the form of multilevel regression with poststratification, or MRP) to forecast results such as Trump’s election wins and Brexit more accurately than conventional polling models. Before this, they’d been using regression and getting their forecasts wrong.

If you want to avoid the errors made by political forecasters and make accurate predictions and valid recommendations about what drives team performance in your organization, move away from GLM regression to MLM.

More information available on request.

People analytics: why some companies are making a fortune while others are losing out

People analytics – when implemented in the right way – can be a real fortune-maker for businesses. The problem is that a lot of ‘analytics’ styles out there will not reap the rewards that businesses expect. Explore the four styles outlined below to see if your analytics approach is missing a money-making trick…

One of the perks of being a consultant is that it allows you to compare a variety of business approaches used by different companies. Take people analytics for example: I’m constantly amazed at how some businesses make a fortune from their investments in this area, while others struggle to make ends meet and even to recoup their initial, potentially substantial, investments.

What separates winners and losers in the game of people analytics?

There are a variety of people analytics styles to choose from – and the approach a business decides to adopt can determine whether it fails or thrives in its implementation of people analytics. The four I’m going to consider here are:

  1. Infrastructure Obsessives
  2. Reactive Data Waiters
  3. Data Miners
  4. Proactive Business Analysts

1. Infrastructure Obsessives

You know those people who never seem to get any work done because they spend all their time rewriting to-do lists and playing with new time-management methodologies? Infrastructure Obsessive people analytics functions do pretty much the same. Instead of just knuckling down and doing some people analytics, they spend most of their time (and money) on…

  • Governance: Setting up interminable governance structures for projects they’ll probably never run
  • Stakeholders: Meeting ‘empathetically’ with stakeholders whose business they don’t understand and with whom they’ll probably never engage again
  • Data Privacy: Enriching their lawyers by planning for data privacy contingencies, most of which would require – in insurance parlance – 20 consecutive Acts of God in a 24-hour period
  • Data integration: Spending months (sometimes years) sucking useless data from all their global databases and spreadsheets into one place; then discovering that it’s mostly out of date; then cleaning it for a sum that would make even Giorgio Armani blush; and then eventually using maybe only one tenth of the data
  • Technology: Evaluating technology after technology before finally investing an amount equivalent to the GDP of a small state on the ‘perfect platform’ – and again, only ever to use one tenth of its capability
  • Conferences: Attending endless conferences and then meeting with swarms of ever-hopeful consultants to discuss dozens of methodologies that they don’t even understand, let alone ever use
  • Consultancy: Spending more time with that high-end management consultancy than the average Fortune 500 company devising a people analytics vision, mission and objectives, which they’ll never end up implementing

And finally (but only if there’s any time left)…

  • People Analytics: Waxing on ad nauseam about the value they could generate if only the above activities left them some time to do some people analytics

In other words, Infrastructure Obsessives spend so much time and money building an infrastructure that there’s seldom any time left for doing any decent people analytics, let alone profiting from it; and, in some cases, their bosses eventually curtail their people analytics investments altogether because of the high resource consumption compared with the low returns.

Of course, no one is suggesting that infrastructure is a bad thing. But people analytics – like any organisational activity – is about balancing risk and reward. If you go overboard on mitigating risk, you never end up collecting the reward; nor will you make real money from your investments.

There are many possible reasons why people become Infrastructure Obsessives. For starters, they are often the victims of consultancies that emphasise people analytics infrastructure but who themselves lack the experience to deliver meaningful analytics.

Another cause of infrastructure obsession is CHROs whose people analytics skills are limited, but who realise that ‘one needs to be visible in people analytics nowadays’.

For these individuals, infrastructure obsession is a useful deflective measure because it makes them appear to be busy with people analytics while in reality they’re not budging out of their comfort zones.

Finally, infrastructure obsession is sometimes caused by an excessively risk-averse corporate culture. In these cases, people analytics is seldom the only function to be affected by this aversion to meaningful activity.

2. Reactive Data Waiters

Data Waiter people analytics functions start out similarly to Infrastructure Obsessives, but they at least make some money by actually using their infrastructures to do some ‘analytics’. Their issue is that the analytics they deliver are primarily just reactions to user requests for simple data reports which could be taken care of with a little self-service and end-user training.

However, even in this reactive scenario, Reactive Data Waiters could add some value by asking users why they need the data in the first place, rather than simply just handing it over. Typically, user responses are:

  1. “My manager asked me to get the report but I don’t know why she wants it”
  2. “We’re concerned about engagement, retention, absence, cost of recruitment.” In other words, they require the data to address a people process or workforce capability issue (see Levels 3 & 4 in figure 1 below).

In both of these cases, people analytics professionals could adopt a more value-added approach by trying to determine whether there is an associated business issue (of the kind found in Levels 1 & 2 of figure 1) on the basis that there’s not much point in trying to address Level 3 & 4 people issues if they aren’t causing any business problems.

A Data Waiter mentality is most commonly found in people analytics professionals who understand the HR language of Levels 3 & 4, but are uncomfortable with the business language of Levels 1 & 2. The fix is usually a commercial education programme.

While Reactive Data Waiter people analytics functions probably manage to keep the wolf from the door, there is a huge opportunity cost because they don’t come close to the potential of what might be achieved if they adopted a more proactive people analytics approach as described below.

Figure 1: Human Capital Value Profiler Framework

3. Data Miners

Data Miners focus on the technology components of people analytics infrastructure and – in particular – on bringing all their corporate data and any big data they can find into a single virtual cloud. This is so that they can systematically trawl through it looking for spurious relationships (correlations), which may (or more likely may not) exist.

Sophisticated Data Miners use specialised tools such as RapidMiner or SAS, while amateurs bring to bear the skills they gained at that one day visualisation training course (you know the one) to use Tableau, Business Objects or Cognos to create a gallery’s worth of graphs with artistic imagination that even the Louvre would be proud of.

While visually compelling and imaginatively eye-catching, these graphs add approximately zero value to their employer’s human capital decision-making process.

This is why professional data analysts refer to data mining as data fishing – because even if Data Miners do happen to stumble across significant relationships in their large data collections, they’re more likely to be meaningless coincidences rather than real Level 1 or 2 business issues.

To illustrate this point, consider some of the ‘relationships’ discovered by Data Miners, which include gems such as the finding that employees who use Firefox or Chrome are better employees…

As far as I can tell, there are only two kinds of company that ever make money out of data mining:

  1. Companies that have solved all their business problems and happen to have spare data mining resources (there can’t be many of those around)
  2. Companies whose business model revolves around data and advertising such as Google and Facebook

4. Proactive HR Business Analysts

Finally we come to the rather small group of companies who get real value from their people analytics investments. They do this by proactively seeking out high-value Level 1 & 2 KPD and business opportunities (see Figure 1) before even thinking about building a people analytics infrastructure. In other words, they go straight for the money with Level 1 & 2 projects like:

  1. Productivity improvement: Using optimisation modelling to identify and automate low-level repetitive tasks; determining the company’s workforce productivity drivers and then delivering these through employee-centric training
  2. Enhancing customer depth and share of wallet: Scientific profiling and recruitment of high potential customer-facing personnel and salespeople; and again, the delivery of training based on analytically determined competencies
  3. Innovation enhancement: Application of people analytics to identify where their company’s corporate culture is inhibiting competitive advantage and then designing interventions to remedy these issues

This is not to say that Proactive HR Business Analysts don’t also fulfil a Data Waiter function, providing simple user reports and data as needed: it’s just that they view reactive data-waitering as a people analytics hygiene factor rather than a core activity.

There are three other points worth noting about the Proactive HR Business Analysts:

First, they view Level 3 & 4 workforce capabilities and people practices as vehicles for improving their company’s KPDs and business outcomes, rather than as ends in themselves. In other words, they don’t seek to increase engagement or retention for its own sake unless there is a concrete business issue attached to it.

Second, Proactive HR Business Analysts focus primarily on predictive analytics rather than simple reporting. This makes sense because profitable people analytics is the result of predicting which problematic Level 3 & 4 people processes and workforce capabilities are causing missed Level 1 & 2 targets. Once identified, action can be taken to repair the faulty Level 3 & 4 predictors.
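A minimal sketch of this predictive step, using entirely synthetic data and a hypothetical pair of Level 3 & 4 metrics (attrition rate and time-to-fill) as predictors of a missed Level 1 & 2 target, with scikit-learn’s logistic regression:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic illustration only: predict whether a business unit misses its
# Level 1/2 revenue target from Level 3/4 people-process metrics.
rng = np.random.default_rng(0)
n = 500
attrition = rng.uniform(0.02, 0.30, n)      # Level 4: annual attrition rate
time_to_fill = rng.uniform(20, 120, n)      # Level 3: days to fill key roles

# Simulate the outcome: units with high attrition and slow hiring are
# more likely to miss their target.
logit = -4 + 10 * attrition + 0.03 * time_to_fill
missed_target = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([attrition, time_to_fill])
model = LogisticRegression().fit(X, missed_target)

# Positive coefficients flag both people metrics as repair candidates
print(dict(zip(["attrition", "time_to_fill"], model.coef_[0])))
```

The value here is directional: the model points at which Level 3 & 4 processes are driving the missed Level 1 & 2 targets, so that remedial investment can be prioritised accordingly.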

Finally, Proactive HR Business Analysts are agile: they operate on a shoestring by favouring the rapid delivery of desired business outcomes over an exaggerated need for people analytics infrastructure. They then put some of the resulting income towards developing a people analytics infrastructure.

In other words, each successful people analytics project funds the next stage of the infrastructure. This is not to say that they don’t also carefully review all the legal requirements as they go along. But it does mean that, in general, their technology and data integration costs are minimal.

The money makers are in the minority

In summary, the only companies making real money out of people analytics are those that proactively seek out business issues and use predictive analytics to address them.

These people analytics functions have an intimate understanding of the trade-off between investments in people analytics infrastructure and proactive action and are experts at balancing this risk in order to earn a handsome dividend.

People Analytics and the Scientific Method: Part I

Part I: Introduction

Overview

This article is the first in a three-part series explaining how the scientific method can be used to increase the effectiveness and profitability of people analytics in corporate environments. Part I offers a rationale for using the scientific method. Part II explains options for its deployment and Part III compares it to alternative people analytics practices.

Scientific Decision-Making in Organizations

Owing to the significant social and health risks associated with the release of a new drug, pharmaceutical companies are legally required to deploy a process – known as a clinical trial – to demonstrate, for example, that the proposed drug is fit for purpose and that it does not result in unmanageable side effects. Clinical trials, in turn, are based on the scientific method to establish causality beyond reasonable doubt (Figure 1). Many of today’s business analytics practices are in fact drawn from the scientific method.

However, the scientific method is not merely confined to manufacturing and R&D functions in business: marketing, procurement and finance functions have also benefited from its use for at least the last 20 years. For example, prior to investing in a campaign, analytical marketers use A/B testing – a technique based on the scientific method – to demonstrate which marketing campaign will deliver the greatest return on investment (Figure 2).
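The arithmetic behind such an A/B test is straightforward. Here is a minimal stdlib sketch (the conversion counts are invented for illustration) using a two-proportion z-test to compare two campaign variants:

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical campaign results: variant B converts 5.8% vs 4.0% for A
z, p = two_proportion_z_test(conv_a=400, n_a=10_000, conv_b=580, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A small p-value says B’s lift over A is very unlikely to be chance, which is exactly the kind of causal-style evidence the marketer needs before committing the campaign budget.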

Yet when it comes to people analytics, only a few companies – like Unilever for example – use the scientific method to guide their investments in people programs (see Figure 3).

This is remarkable because many companies that use the scientific method to improve their marketing, procurement and R&D decision-making choose not to use it when it comes to guiding their people investments, despite the fact that they spend more on people than they do on marketing, procurement and R&D combined. Instead, these companies focus entirely on low-level people analytics techniques like HR reporting, visualization and dashboards to deliver their people analytics results.

Why are these companies not using the scientific method? There are at least two possible reasons. Firstly, many lower level people analytics approaches like HR reporting, dashboards, and visualization can be learned at post-conference workshops or on one-week courses. In contrast, the scientific method requires a significantly higher educational investment, typically a postgraduate research qualification. It may be that some HR functions, unlike their marketing and R&D counterparts, are not willing to make this investment.

Secondly, the scientific method is not widely publicized, since it is not in the interests of technology companies to do so. This is because the scientific method requires significantly less technology to be effective than lower-level analytic approaches, which rely on vast quantities of data and on technology to display it (rather than analyse it). Technology companies are therefore unlikely to promote high-level analytics techniques that would reduce their sales revenues. The result is that technology companies have “trained” the market to think about people analytics as a technology-driven discipline.

There is, of course, a place for lower-level analytics like dashboards, visualization and HR reporting in people analytics. In fact, they are essential for statutory reporting and business problem identification. Unlike the scientific method, however, they cannot establish causal links between people processes and desired business outcomes; nor can they be used to identify those people processes which require modification to enable the business to achieve its business objectives.

The result is that companies which focus purely on low-level people analytics are likely to be wasting a significant proportion of their human capital investments. The phrase low-level analytics deliver low-level returns has never been truer, and this is undoubtedly contributing to Gartner’s Trough of Disillusionment in the people analytics industry.

The next article in this series will describe a methodology for implementing the scientific method.

People analytics – The 2016 bottom line

“Talent is now the most scarce and valuable commodity on earth, so companies who really understand how to attract, retain, and manage people will win”.

– Patrick Coolen, 2015

HR analytics is more than simply data mining on employee efficiency. Beyond this, it ultimately aims to provide tangible insight into the processes that define day-to-day business operations – the pre-set actions that ultimately lead to the pence and pounds on the bottom line. At its heart, it is about improving strategy and processes.

For many, the challenge that threatens the firm footing HR analytics may provide comes in the form of knowing what data to capture and what data model to harness. Get either or both wrong and the envisaged optimal return on human capital will go unrealised. Here’s exactly what is entailed in this all-encompassing process – and here’s why corporations should be investing in analytics around their most invaluable resource.

Analytics achieves some great results – so why do some refer to a “gloomy landscape”?

Before I dive into the details of people analytics, I want to outline what some refer to as the “gloomy landscape” that is the current outlook.

Deloitte research suggests only 4% of HR departments have any form of predictive analytics in place, whilst more than 60% are still grappling with a mass (and when I say mass, read mess) of systems, even to get the most lacklustre of reports.

So, just what do I mean by lacklustre? Well, that would be the fact that, for the most part, this 60% struggle to know how many people are on the payroll on any given day, whilst most also fail to track hourly workers.

Read more about this – Josh Bersin – Get Started with Talent Analytics

This is all despite people analytics delivering staggering results, as I’m just about to show you…

So, just what can be realised, when you get people analytics right?

One USA healthcare organisation achieved $100 million in savings, whilst also securing the Holy Grail in relation to its workforce – more engaged employees.

Bon-Ton identified attributes that made cosmetic reps successful – something that has led to an increase in sales per rep of $1,400, whilst driving down employee churn by 25%.

Wells Fargo, with predictive model in hand, has been able to select the most suited of candidates for teller and personal banker positions – this program has, within 12 months, achieved an increased retention rate in tellers and personal bankers of 15% and 12% respectively.

And one company (unnamed under NDA) reduced its retention bonuses by $20 million – as well as its employee attrition by 50% – all thanks to predictive behavioural analysis.

An appropriate analytics solution – The crux of the matter

Choosing the right analytics solution comes down to this, and this alone:

Businesses must know, upfront, what outcomes are sought from the tools used.

From this strategic standing start, a process as mapped out by Giles Slinger (2015) may be followed:

1. Get the questions clear

2. Get your data clean

3. Understand the ‘as-is’

4. Design the ‘to-be’

5. Understand the organisation as a system

Read more about this process – Strategic HR Review – Emerald Insight Whitepaper

HR analytics solutions: The options that lie before you

– The cloud based HR system – providing powerful integrated features pre-built and ready to go (such as Oracle, SAP, Rosslyn Analytics, ADP, IBM, Ultimate Software, Saba, Skillsoft, CornerstoneOnDemand, Workday).

– Smaller, creative providers – for more customised, targeted solutions (such as HiQLabs or iNostix).

– Purpose built data analytics systems – for systems designed completely around reporting and analytics (such as Google Analytics, Hadoop [IBM, HP, Microsoft, and Intel], Amazon, and Teradata).

HR analytics benchmarking: the latest thinking

“I shall try not to use statistics as a drunken man uses lamp-posts, for support rather than for illumination.”

– Attributed to Andrew Lang, Scottish novelist (1844–1912)

Sage words indeed, and ones that cover many of the most recent notions to emerge from HR analytics. These include: a focus on internal data combined with external data (social profiles, employee job history and more); analytics combined with the oft-forgotten gut feeling; decisions built upon more than data alone – for the protection of ethics; and analytics that is less about crunching data for its own sake, and more about answering a pre-defined question or challenge.

These strands of thought continue to evolve, yet one of the strongest opinions in this realm is that Human Capital Analytics doesn’t belong in HR at all. It’s for this reason that this branch of analytics is better called “People Analytics”. When we consider that this realm of data science can relate to sales, productivity, turnover, retention, accidents, fraud, and even the people issues that drive customer retention and customer satisfaction, it’s clear that the expansive goals of People Analytics span far beyond HR.

Ultimately, it is argued, removing People Analytics from HR can allow for better cohesion between the core departments that are responsible for these goals and objectives – finance, operations, and sales – for a truly end-to-end approach.

A matter of metrics

People analytics is far from a ‘one glove fits all’ task – organisations must tailor their approach and, of pivotal importance, benchmark their metrics. In layman’s terms, this ensures that their analytics is delivering ‘bang for their buck’. That said, here are the most commonly harnessed metrics in people analytics:

– Staff advocacy score

– Employee engagement level

– Absenteeism Bradford Factor

– Human capital value added (HCVA)

– 360-degree feedback score

Read more about these metrics – Bernard Marr – Five People Metrics Everyone Should Know
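Of the metrics above, the Bradford Factor is the most formula-driven: B = S² × D, where S is the number of separate absence spells and D is the total days absent over a set period (typically 52 weeks). A minimal sketch – the function name is mine:

```python
def bradford_factor(spells: int, total_days_absent: int) -> int:
    """Bradford Factor: B = S^2 * D.

    The spell count S is squared while the day count D is not, so
    frequent short absences score far higher than one long absence.
    """
    return spells ** 2 * total_days_absent

# One 10-day absence vs ten 1-day absences over the same period:
single_spell = bradford_factor(1, 10)   # 1^2  * 10 = 10
many_spells = bradford_factor(10, 10)   # 10^2 * 10 = 1000
```

Same days lost, a hundred-fold difference in score – which is exactly the disruptive absence pattern the metric is designed to flag.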

Not sure where to even begin?

If you’re unsure as to where to begin, it may be wise to start with recruitment – the realm of people analytics where the highest revenue growth and profit margin impact can be found.

“Recruitment is a good place to start, because if you can start to provide some predictive analytics about when and where we’re going to need people with whatever capabilities – way before people actually leave or we have vacancies – then that has a really significant impact on an organisation because there’s no downtime, especially in critical roles”

– Andrew Lafontaine, senior director, HCM transformation, Oracle APAC

How well are you (really) managing human capital?

Let’s put some tangible numbers to the process – work out your human capital expertise using any one of the following ten suggested efficiency ratios:

1.) Revenue Factor = Revenue / Total Full Time Employees

2.) Voluntary Separation Rate = Voluntary Separations / Headcount

3.) Human Capital Value Added = (Revenue – Operating Expense – Compensation & Benefit Cost) / Total Full Time Employees

4.) Human Capital Return on Investment = (Revenue – Operating Expenses – Compensation & Benefit Cost) / Compensation & Benefit Cost

5.) Total Compensation Revenue Ratio = Compensation & Benefit Cost / Revenue

6.) Labour Cost Revenue Ratio = (Compensation & Benefit Cost + Other Personnel Cost) / Revenue

7.) Training Investment Factor = Total Training Cost / Headcount

8.) Cost per Hire = (Advertising + Agency Fees + Recruiter’s Salary/Benefits + Relocation + Other Expenses) / Number of Hires

9.) Health Care Costs per Employee = Total Health Care Costs / Total Employees

10.) Turnover Costs = Termination Costs + Hiring Costs + Training Costs + Other Costs

Read the complete piece from Matt Evans – Metrics for Human Resource Management
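As a quick sketch, several of the ratios above can be computed from a handful of inputs. The figures and function names below are illustrative only, not drawn from Matt Evans’s piece:

```python
def revenue_factor(revenue: float, ftes: int) -> float:
    # Ratio 1: revenue generated per full-time employee
    return revenue / ftes

def human_capital_value_added(revenue: float, operating_expense: float,
                              comp_benefit_cost: float, ftes: int) -> float:
    # Ratio 3: (Revenue - Operating Expense - Comp & Benefit Cost) / FTEs
    return (revenue - operating_expense - comp_benefit_cost) / ftes

def human_capital_roi(revenue: float, operating_expense: float,
                      comp_benefit_cost: float) -> float:
    # Ratio 4: surplus generated per pound of compensation & benefit spend
    return (revenue - operating_expense - comp_benefit_cost) / comp_benefit_cost

# Illustrative company: £10m revenue, £6m other operating expense,
# £2m compensation & benefits, 100 full-time employees
rf = revenue_factor(10_000_000, 100)                                     # 100,000
hcva = human_capital_value_added(10_000_000, 6_000_000, 2_000_000, 100)  # 20,000
hcroi = human_capital_roi(10_000_000, 6_000_000, 2_000_000)              # 1.0
```

Tracking these quarter on quarter, rather than as one-off snapshots, is what turns them from vanity numbers into management signals.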

People Analytics: Think you have what it takes?

People analytics is a demanding and complex role – and one that commands a wide variety of skill sets. Here’s a rundown of just what it takes:

– A degree and experience in mathematics, statistics, computer science and engineering

– Strong ability in Excel (such as macros, dashboards and pivot tables)

– Commercial acumen, analytical and interpretation skills

– Relationship building, influencing, intuition, and timing

– Clear storytelling skills (Ulrich and Rasmussen)

Looking toward the future

The future for companies that embrace people analytics is unequivocally bright – and already there is increasing uptake in hiring for HR analytics professionals. As the axis of the world of commerce turns, we are becoming an ever more knowledge-based economy, and the result will be increased demand for ever more specific skill sets in the people analytics realm.

Yet the divide between this promise and the current lacklustre performance amongst corporations is a staggering one – and one that is often compounded by people analytics initiatives that start with the data, rather than with the business challenge, objective or goal. Just how the gap between today’s outlook and tomorrow’s promise is to be bridged will be an interesting story to watch unfold.

The Future of Human Resources in the Age of Automation

By Max Blumberg, PhD

The full version of this article will be published in Winmark’s excellent C-Suite Report following my session there in November:

  1. Report: http://www.winmarkglobal.com/c-suite-report.html
  2. Session: http://www.winmarkglobal.com/wm/ViewEvents?function=Upcoming&network=SHR

Intro

We’ve all seen films where Artificial Intelligence replaces humans en masse – and much debate swirls around just how much of this could ever become reality.


Meet Erica – perhaps the world’s most advanced, human-like robot yet. She demonstrates that we may not be too far away from silver-screen-like AI workers.

https://www.youtube.com/watch?v=MaTfzYDZG8c

Yet when we talk of automation and robotics, we really shouldn’t be looking to tomorrow. Revolution has actually already occurred – and automation is, or at least, should be transforming the world of HR.

Putting it into context

Let’s consider a few statistics…

45 percent of work activities could be automated using already demonstrated technology.

Yet fewer than 5 percent of occupations can be entirely automated using current technology. However, about 60 percent of occupations could have 30 percent or more of their core activities automated.

The effects of this are already being felt.

Last year, Barclays announced that as many as 30,000 banking employees could lose their jobs to automation.

Whilst more recently, Apple’s manufacturer, Foxconn, replaced 60,000 factory workers with robots.

So, what does all of this mean for the world of HR?

In short, these drastic job market changes demand that HR professionals brace themselves. Here is what may be ahead when it comes to your roles…

Corporate culture: Just what does a culture where AI, robotics and humans intermingle look like? This is a particularly relevant question given the hostility that many workers may feel towards the robotic replacements working beside them.

Performance management: Performance-based management – comparing one employee to the next – is the traditional means of assessing workforce progression.

As robotics increasingly enters our workforces, just how can a scale be defined between the two? This may well be a question that you’ll have to answer.

Employee relations: Technology has already disrupted markets – take Uber as the perfect example of a tidal wave of now-unhappy workers taking action against automation. The question of how you harness advancement whilst handling disgruntled employees may well prove relevant sooner rather than later.

Motivation and rewards: Some experts reason that robotics will drive down wages – if this is realised then you’ll need to rethink your reward system. Just how can workers remain motivated when working alongside the automated tech that has taken bread off their table?

Finally, some schools of thought argue that you’ll look after far fewer staff, whilst others contend that you’ll face departments in flux as automation boosts productivity and shifts staffing levels in other areas.

Are you sitting comfortably?

Perhaps the HR department itself is not immune to a certain level of replacement – existing technology already boasts advanced payroll, scheduling and benefits capabilities. Yet until recently, such automation represented a collection of separate tools rather than a single robot threatening your job role.

Now, however, there’s Talla – a desktop chat bot that’s in the final stages of being prepared for office life – taking on tasks such as on-boarding, 24/7 workforce support and Tier 0 and Tier 1 support. And version two is already in the making.

Author

Max Blumberg, PhD is Founder of the Blumberg Partnership workforce and sales force analytics consultancy and Visiting Researcher at Goldsmiths, University of London.

Avoiding People Analytics Project Failure

Contents

1. Introduction

2. The Four-Block People Analytics Model

3. Sources of People Data

4. How to create Robust People Data Sets with Strong Correlations

  • People Metrics Definition Process
  • The People Metrics Definition Workshop for Operational Managers
  • Restricted Range, Babies & Bathwater
  • Technical Reasons Why Data May Not Hang Together

5. Take Away

1. Introduction

Making sense of people data is a struggle for many HR professionals. People analytics is only effective when data collection is focused on achieving a particular management objective – such as improving talent management processes like recruitment or retention, or demonstrating HR’s contribution to the value/ROI of these processes. Despite this core principle, many companies simply analyse the data nearest to hand – with results that are anything but insightful. Ultimately, ad hoc data analysis invariably ends in project failure – delivering only a wasted budget and a belief that people analytics is just hype.

As most technical analysts will tell you, people analytics project failure usually boils down to just one thing: hardly any significant correlations could be found in the data.

This article will help you harness people analytics, and avoid project failure, by presenting a systematic, cost-effective methodology for creating robust data sets that correlate. We will be focussing on two tools: the People Metrics Definition Process and People Metrics Definition Workshop for Operational Managers.

2. The Four-Block People Analytics Model

The People Metrics Definition Process rests on the premise that the primary – and perhaps only – reason for investing in people programmes such as recruitment, development, succession planning and compensation is to deliver the workforce competencies required to drive the employee performance needed to achieve specific organisational objectives. Graphically, this can be expressed as follows:

People Programmes → Workforce Competencies → Employee Performance → Organisational Objectives

If any link in this chain – the Four-Block People Analytics Model – is broken, it means that investments in people programmes are not delivering the organisational objectives aimed for.

The strength of a link between any two blocks in the model is referred to as the statistical correlation. When two blocks are correlated, a change in the values of one block can be predicted from a change in the values of the other. To put this into a real-world example: a training programme improves employees’ competency scores, which in turn results in a predictable, corresponding increase in employee performance ratings. This would show that competencies and employee performance are correlated. Where there is a poor correlation between competencies and employee performance, however, training programmes which increase competency scores will not result in increased employee performance. From a business perspective, this means that the training spend was a wasted investment.
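In practice, the link strength between two blocks is an ordinary Pearson correlation. A minimal sketch with made-up scores (a real analysis would use your own competency and performance data):

```python
import numpy as np

# Hypothetical post-training competency scores and the corresponding
# performance ratings for eight employees (illustrative numbers only)
competency = np.array([2.1, 3.0, 3.4, 4.2, 4.8, 5.5, 6.1, 6.8])
performance = np.array([1.8, 2.9, 3.1, 4.0, 4.6, 5.2, 6.0, 6.5])

# Pearson correlation between the two blocks
r = np.corrcoef(competency, performance)[0, 1]

# r close to 1: competency gains predict performance gains, so training
# spend plausibly flows through to the performance block. An r near 0
# would suggest the training budget is not reaching performance at all.
```

With eight data points this is purely illustrative; real samples need to be far larger before the correlation means anything, as discussed later under technical reasons for poor correlations.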

3. Sources of People Data

How you obtain the data used in each block is essential for establishing correlations.

1. Data sources for organisational objectives

Organisational objectives data reflects the extent to which business objectives are being achieved. This data is often expressed in financial terms, although there is an increasing drive towards the inclusion of cultural and environmental measures. A common and critical pitfall to avoid here is to consider workforce objectives (such as retention or engagement), rather than organisational objectives.

2. Data sources for employee performance

Employee performance data is typically generated by managers in the form of multidimensional ratings obtained during performance reviews. An employee performance rating should simply reflect the employee’s potential to contribute to organisational objectives. Note that the term potential is used deliberately to emphasise that employees who do not fully contribute to organisational objectives today may do so in the future if they are properly trained and developed (assuming that they can be retained). A common error here is confusing employee performance measures with competency measures, which we define next.

3. Data sources for competency

Competencies are observable employee behaviours hypothesized to drive the performance required to deliver organisational objectives. The word “hypothesized” is used to emphasise that the only way of knowing whether the company is investing in the right competencies is to measure their correlation with employee performance. If the correlation is low, it would be reasonable to assume that the company is working with the wrong competencies (or that there is a problem with performance ratings).

There are three problems usually associated with competency data. First, competency ratings are often based on generic organisation-wide competency frameworks. The resulting competencies are therefore typically so general as to be useless for any specific role.

Second, competency frameworks are often created by external consultancies lacking full insight into the real competencies required to drive employee performance in a particular sector and organisational culture. The only way to create robust competency frameworks is to obtain them from the operational personnel to whom the competencies apply. The People Metrics Definition Workshop for Operational Managers (which we explore later) will achieve this.

Finally, many companies confuse employee competencies with employee performance – presenting competencies as part of an employee’s performance rating. As noted above, competencies are merely hypothesized predictors of employee performance; they are not standalone measures of it. A real-world example: a good communication competency may help a salesperson sell more, but a high communication score will not make up for a missed sales target.

4. Data sources for People Programmes

Programme data usually reflects the efficiency (as opposed to effectiveness) of talent management programmes such as the length of time it takes to fill a job role, the cost of delivering a training program, and so on. Programme data is usually sourced via the owner of the relevant people process.

For further ideas about people data measurement, visit www.valuingyourtalent.com

4. How to create robust people data sets with strong correlations

Here are four remedies for creating a Four-Block People Analytics model that actually correlates:

1. People Metrics Definition Process

The most common reason for poor correlations is using data not specifically generated with a defined purpose in mind. This is like trying to cook a sticky-rice stuffed duck without buying rice or a duck – it’s simply not going to end well.

The best way to cook up a successful people analytics project is to use a People Metrics Definition Process. This starts with the end in mind – first defining the business objectives data – and then works backwards through the Four-Block People Analytics Model:

1. Organisational objective

First ask: “What organisational objective(s) need to be addressed?” Where possible, choose high profile objectives such as those which appear in the annual report (such as revenues, costs, productivity, environmental impact, and so on). Then narrow down this list to those metrics which, for example, reflect targets that are being missed. This approach ensures your people analytics has relevance.

2. Employee performance

Now consider how the employee performance required to achieve these objectives will be measured. This is discussed under section three – Restricted Range, Babies and Bathwater.

3. Competencies

Next define the competencies likely to be needed to drive this employee performance. Note that global competencies usually exhibit far lower correlations than role-specific competencies. Competency definition is discussed below, under the heading The People Metrics Definition Workshop for Operational Managers.

4. People Programmes

Finally consider the kinds of people programmes that will be required to deliver these competencies and also how to measure the efficiency of these programmes. Bear in mind that a people programme is only as effective as the competency metrics that it produces.

Companies performing the above steps in any other order should not be surprised if they end up with poor correlations between their people programmes, competencies, employee performance and organisational objectives.

2. The People Metrics Definition Workshop for Operational Managers: Avoiding the Talent Management Lottery

Probably the second most common reason for poor correlations in the Four-Block People Analytics Model is the use of inconsistent employee performance data. Inconsistent performance data is usually the result of managers not knowing what good performance looks like in their teams. This means that the company lacks an analytical basis for distinguishing between its high and low performers, which turns the allocation of development, compensation and succession expenditure into a talent lottery rather than an analytically-based process.

By far the best (and easiest) way of transforming a performance management lottery into an analytical programme is the People Metrics Definition Workshop for Operational Managers.

This is simply a facilitated meeting in which operational managers are guided through the People Metrics Definition Process. The key deliverables are:

  1. A set of people programme, competency, employee performance and organisational objectives metrics
  2. Increased engagement between operational managers and the data that they will be using to manage their teams. You simply cannot get operational managers to engage with HR processes and data if the model’s metrics definitions are provided by non-operational parties, such as external consultants or HR. Even if these external metrics are of high quality, operational managers will still tend to treat them as tick-box exercises because they do not believe (probably correctly) that non-operational parties can truly understand the business and its culture as well as they do.

The role of HR and/or external facilitators in the People Metrics Definition Workshop is therefore not to provide content but to expertly facilitate the gathering of people metrics and to help managers reach a consensus.

When it comes down to the crunch, this workshop ultimately has an enormously positive impact on correlations within the Four-Block People Analytics model.

3. Restricted Range, Babies and Bathwater

Another problem that comes from not properly distinguishing between high- and low-performing employees is known as Restricted Range. Restricted range means that team member performance ratings tend to cluster around the middle rather than using the full performance rating range. For example, the graph below shows a typical team performance distribution for a company using a 1 (poor performance) to 6 (high performance) rating scale. Note the number of ratings clustered around 4 and 5 instead of spread across the full 1–6 range:

(Figure: flat performance ratings, clustered around the middle of the scale)

There are many possible reasons for restricted range. Sometimes managers simply do not know what good performance looks like, as discussed above. Another common cause is that, in order to maintain team engagement and unity, managers avoid low scores; on the flip side, they may avoid high scores so as to avoid perceptions of favouritism.

Restricted range carries two serious implications:

  1. Restricted range, by definition, compresses employee performance ratings; it also seriously restricts the possibility of decent Four-Block People Analytics Model correlations
  2. If everyone in a team has similar ratings, then managers must be using some other basis – some other scale even – for making promotions and salary decisions. Secret scales cannot be good for team morale or guiding employee development, compensation and succession planning investments.
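The first implication is easy to demonstrate by simulation: truncating ratings to a narrow band attenuates the observed correlation even when the underlying relationship is unchanged. A sketch on synthetic data (the 0.6 ‘true’ correlation is an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5_000

# Assume a true competency -> performance correlation of about 0.6
competency = rng.normal(0, 1, n)
performance = 0.6 * competency + 0.8 * rng.normal(0, 1, n)

r_full = np.corrcoef(competency, performance)[0, 1]

# Simulate rating clustering: keep only mid-range performance ratings,
# like the 4-5 cluster on the 1-6 scale described above
mask = np.abs(performance) < 0.5
r_restricted = np.corrcoef(competency[mask], performance[mask])[0, 1]

# r_restricted comes out well below r_full: the same underlying
# relationship, but the restricted range hides it from the analyst.
```

In this run the correlation falls by roughly two-thirds, purely because of the truncation – nothing about the competency–performance relationship itself has changed.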

Addressing restricted range is usually a cultural issue whose causes must be carefully understood before attempting intervention. One remedy involves explaining to managers that greater differentiation between their high and low performers will result in the right team members getting the right development, which in turn will result in higher team performance for the manager.

Perhaps this is the time to raise the thorny topic of employee ranking – such is its bad press that employee rating/ranking is supposedly no longer used at companies such as Microsoft, GE and the Big 4 consultancies.

A little reflection reveals that avoiding employee performance ranking is a case of “throwing the baby out with the bathwater”, because if employees are really no longer measured, then on what basis are promotions, salary increases and development investments made? Presumably it means that the “real” employee performance measures have been pushed underground into secret management meetings and agendas where favouritism and discrimination cannot be detected. This cannot be good for employer brand.

If additional proof is required that employee ranking will always exist, consider what would happen if a serious downturn forced these companies to lay off employees, as was recently the case with Yahoo! If employees are not ranked in some way, then any layoffs would appear to be random – a return to the talent management lottery scenario.

It must therefore be reasonable to assume that no matter what a company’s public relations department says, employee performance ranking still exists everywhere, even if it has been temporarily pushed underground in the past few years. If even more proof is required, confidentially ask an operational manager with whom you have a trusted relationship who in their team has the greatest and least performance potential. Next, ask them whose performance potential lies somewhere near the middle. What you are doing here is, in fact, getting the manager to articulate the “secret” scale that they use for salary increases and promotions.

An important role of people analytics professionals – and HR in general – is to contribute positively to an organisational culture where such scales are part of the analytics mainstream and not hidden in secret meetings. This ranking approach can be extended – and is indeed already used by many companies – by asking multiple managers to rank the employees and/or using a team 360.

The above ranking process will also prove useful to companies wishing to avoid legal accusations of arbitrary dismissal and layoff; Yahoo!, for example, would have benefited from this approach.

4. Technical reasons why data may not hang together

Finally, there are some technical statistical reasons why the Four-Block People Analytics Model data may not correlate, such as:

  • The data set may be too small – correlations are unstable in small samples
  • If you’re using parametric (i.e. non-machine-learning) techniques, the data may not be sufficiently normally distributed and/or the relationships may not be linear. This is another good reason for companies to consider migrating to machine learning techniques.
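Before concluding that the model is broken, it can be worth screening each variable pair against these technical pitfalls. A minimal sketch – the function name and thresholds are illustrative rules of thumb, not fixed standards:

```python
import numpy as np

def sanity_check(x, y, min_n=100, max_abs_skew=1.0):
    """Quick screen before running a parametric correlation/regression."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)

    def skew(v):
        # Standardised third moment: 0 for a symmetric distribution
        z = (v - v.mean()) / v.std()
        return float((z ** 3).mean())

    return {
        "n": len(x),
        "big_enough": len(x) >= min_n,            # enough data points?
        "x_roughly_normal": abs(skew(x)) <= max_abs_skew,
        "y_roughly_normal": abs(skew(y)) <= max_abs_skew,
    }

# Example: 500 paired observations (competency score, performance rating)
rng = np.random.default_rng(0)
report = sanity_check(rng.normal(3, 1, 500), rng.normal(50, 10, 500))
```

If a pair fails these checks, the remedy is more data or a non-parametric/machine-learning approach – not abandoning the Four-Block model.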

5. Take Away

Poor correlations in the Four-Block People Analytics Model are a stark reminder that people analytics data needs to be collected with specific business outcomes in mind. Any other form of data ultimately results only in wasted time and resources. The approach must also involve operational managers, who are critical to defining the metrics to be used. Only then can people analytics truly deliver on all that it promises.

 

Please do share your comments, thoughts, successes and failures with me below.

©Blumberg Partnership 2016

With thanks to Tracey Smith of Numerical Insights who kindly reviewed this article.

 

People analytics: how much should you spend on technology?

savemoney-400x250

Can significant investment in expensive people analytics technology ever be justified?

Compare people analytics to marketing analytics: in marketing analytics, the case for expensive technology is reasonably clear because sample sizes run to hundreds of thousands of records. At that scale, high-tech data fishing will usually pay for itself through the discovery of a few new profitable market segmentations.

In the case of people analytics, however, few global workforces are sufficiently large to justify these expensive fishing expeditions in the hope of finding valuable workforce patterns. And even if patterns are found, the ROI would be virtually swallowed up by the cost of the technology required to generate it.

Experience suggests that when it comes to people analytics, optimal ROIs are generated not by expensive data fishing, but by focusing on specific well-defined problems identified by the business. The solutions to most of these problems do not require significant technology investments and can instead be solved with low-cost packages like SPSS and cost-effective cloud technologies. In truth, we’ve even helped companies save $10m using just Excel as part of a well-planned structured data methodology. I’ll post something on people data strategies in the next few days.

The bottom line is that ROIs on people analytics could be even higher if companies avoid unnecessary capital outlays on excessive technology.

 

Living with a statistician

‘You haven’t told me yet,’ said Lady Nuttal, ‘what it is your fiancé does for a living.’

‘He’s a statistician,’ replied Lamia, with an annoying sense of being on the defensive.

Lady Nuttal was obviously taken aback. It had not occurred to her that statisticians entered into normal social relationships. The species, she would have surmised, was perpetuated in some collateral manner, like mules.

‘But Aunt Sara, it’s a very interesting profession,’ said Lamia warmly.

‘I don’t doubt it,’ said her aunt, who obviously doubted it very much. ‘To express anything important in mere figures is so plainly impossible that there must be endless scope for well-paid advice on how to do it. But don’t you think that life with a statistician would be rather, shall we say, humdrum?’

Lamia was silent. She felt reluctant to discuss the surprising depth of emotional possibility which she had discovered below Edward’s numerical veneer.

‘It’s not the figures themselves,’ she said finally. ‘It’s what you do with them that matters.’

(From The Undoing of Lamia Gurdleneck by K.A.C. Manderville – an anagrammatic pseudonym of the famous statistician Sir Maurice G. Kendall, 1907–1983 – found at the start of his seminal book The Advanced Theory of Statistics, Volume 2, co-authored with Alan Stuart.)

Do Competency Frameworks Work in Real-World Organisations?

Introduction

“Do Competency Frameworks Work in Real-World Organisations?”

This question about competency frameworks, psychometric tests and 360° surveys etc. is regularly posed in various forms on LinkedIn analytics groups, and inevitably generates a lot of debate. I’d like to discuss it here in the context of the phrase ‘real-world’ via a case-study.

The term “real world” reflects the perception of many senior and operational managers that while competency frameworks and psychometric tests may be effective in the controlled academic environments in which they are created and developed, they may be less effective in their ‘real-world’ organisations. Could there be any truth to their concerns?

Analysts and businesspeople define “effective” differently

To answer this, we need to delve more deeply into the meaning of the term “effective” in this context. Without getting overly academic, analytics practitioners usually consider a framework “effective” if it has sufficiently high construct validity – that is, if it accurately measures the construct it claims to measure, such as competency, ability or personality. Contrast this with operational managers, for whom “effective” usually refers to the predictive (or concurrent) validity of the measure; that is, the extent to which organisational investments in employee frameworks predict desirable employee outcomes such as job performance, retention and, ultimately, profitability.

And herein lies the rub: the fact that a framework accurately measures the construct it purports to measure (e.g. competency or ability) does not mean that it predicts important employee outcomes. Construct validity alone will therefore not necessarily deliver metrics that can serve as useful inputs to talent programmes specifically designed to maximise the performance and retention of high potentials.

How to establish the validity of a framework

So how do you establish the validity of a competency framework, 360° survey or similar instrument?

  1. You first test concurrent validity on a representative sample of your employees to determine which competencies correlate with desired employee outcomes in the ‘real world’ i.e. in your organisation (as opposed to wherever the vendor claims to have tested it).
  2. You then refine the framework based on what you learn from these correlations.
  3. If the potential cost of framework failure is particularly high, you deploy a methodology to test causality/predictive validity.
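As a minimal sketch of step 1, the per-scale screening could look like the following. Everything here is illustrative: the scale names, the toy data and the 0.20 effect-size threshold are assumptions for the example, not the case study’s actual analysis.

```python
import numpy as np

def screen_scales(scales, outcome, r_threshold=0.20):
    """Correlate each scale with the desired employee outcome and flag
    scales whose effect size is too small, or negative, to be useful.
    The 0.20 threshold is the rule of thumb used in this article."""
    results = {}
    for name, col in scales.items():
        r = np.corrcoef(col, outcome)[0, 1]
        if r >= r_threshold:
            verdict = "keep"
        elif r <= -r_threshold:
            verdict = "drop: negative predictor"
        else:
            verdict = "drop: effect too small"
        results[name] = (round(r, 2), verdict)
    return results

# Toy sample of 250 employees: one genuinely predictive scale,
# one pure-noise scale, and one scale that predicts in reverse.
rng = np.random.default_rng(42)
n = 250
performance = rng.normal(size=n)
scales = {
    "drive":      0.5 * performance + rng.normal(size=n),
    "conformity": rng.normal(size=n),
    "rigidity":  -0.5 * performance + rng.normal(size=n),
}
for name, (r, verdict) in screen_scales(scales, performance).items():
    print(f"{name:11s} r={r:+.2f}  {verdict}")
```

In practice you would run this against real scale scores and outcome data from a representative employee sample, then refine (step 2) by dropping or replacing the flagged scales.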

A case study

Sadly, few organisations do such testing before procuring frameworks, which can lead to unfortunate results. As an illustration, we recently examined psychometric (trait) and learnable-competency data for 250 management employees in the same role at a well-known global blue-chip brand, together with their job performance scores (Fig 1).

[Fig 1: Mediation case]

We modelled the data as three sets of variables:

  • Psychometric scores: Fixed characteristics unlikely to change over time and therefore potentially useful for selection if they correlated with performance
  • Competencies: Learnable behaviours as assessed by the company’s competency framework, 360° survey and development programme assessors. Useful for developing high performance.
  • Employee Performance: We were fortunate to obtain financial performance data for each manager which means these scales were reasonably objective. Ultimately, the reason for investing in psychometrics and competencies for selection and development is to achieve high performance here.
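The “mediation” label in Fig 1 implies a chain in which psychometric traits influence learnable competencies, which in turn influence performance. A minimal sketch of that decomposition, assuming a simple OLS mediation analysis in the classic Baron & Kenny style (variable names and toy data are illustrative, not the case study’s data):

```python
import numpy as np

def ols_slope(x, y):
    """Slope of y regressed on x, with an intercept (via centring)."""
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / (xc @ xc)

def mediation(x, m, y):
    """Classic decomposition: total effect c = direct c' + indirect a*b,
    where a is x -> m and b is m -> y controlling for x."""
    a = ols_slope(x, m)            # trait -> competency
    c = ols_slope(x, y)            # total: trait -> performance
    X = np.column_stack([np.ones_like(x), x, m])
    _, c_prime, b = np.linalg.lstsq(X, y, rcond=None)[0]
    return {"a": a, "b": b, "direct": c_prime,
            "indirect": a * b, "total": c}

# Toy data mimicking the case: a trait score partly drives a learnable
# competency, which in turn partly drives performance.
rng = np.random.default_rng(0)
n = 250
trait = rng.normal(size=n)
competency = 0.4 * trait + rng.normal(size=n)
performance = 0.5 * competency + 0.1 * trait + rng.normal(size=n)
effects = mediation(trait, competency, performance)
print({k: round(v, 3) for k, v in effects.items()})
```

For OLS fits the identity direct + indirect = total holds exactly, which makes it a convenient sanity check on the estimates.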

We found the following relationships in the data:

  • Psychometric and competency scores: As can be seen, only 47% of the psychometric test scales correlated with the learnable competencies, and effect sizes were small, typically less than 0.20, meaning that they would not be helpful for selecting job candidates likely to exhibit the competencies valued by the organisation. One psychometric test, however, did have a large correlation with a competency; unfortunately this correlation was negative, meaning that the higher the job candidates’ test scores, the lower their competency scores – hardly what the company intended when it bought this test.
  • Psychometric scores and Employee Performance: Only 7% of the psychometric test scales correlated with employee performance, and again effect sizes were typically less than 0.20 – meaning that the psychometric tests were not useful for selection. Again, one test had a negative relationship with performance: if high scores on it were used for candidate selection, the organisation would likely be selecting low performers.
  • Competencies and Employee Performance: Only 12% of the learnable competency scales correlated with performance – and again, effect sizes were small meaning that investing in these competencies to drive development programmes was not worthwhile. And once more, some of these competencies had negative correlations with performance, meaning that developing them might actually decrease employee performance.
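An audit that produces the kind of percentages quoted above can be sketched as follows. This is a hypothetical illustration, not the actual analysis; it uses the standard t-test for a correlation coefficient to decide which scales count as “correlated”.

```python
import math
import numpy as np

def audit_summary(scales, outcome, critical_t=1.96):
    """Summarise a framework audit the way the case study reports it:
    what fraction of scales correlate significantly with the outcome,
    and how many of those correlations are negative."""
    n = len(outcome)
    significant, negative = 0, 0
    for col in scales.values():
        r = np.corrcoef(col, outcome)[0, 1]
        t = r * math.sqrt((n - 2) / (1 - r * r))  # t-statistic for r
        if abs(t) > critical_t:                   # ~5% two-sided level
            significant += 1
            if r < 0:
                negative += 1
    return {"pct_significant": round(100 * significant / len(scales)),
            "n_negative": negative}

# Toy audit: eight uninformative scales, one useful, one harmful.
rng = np.random.default_rng(1)
n = 250
performance = rng.normal(size=n)
scales = {f"scale_{i}": rng.normal(size=n) for i in range(8)}
scales["useful"] = 0.5 * performance + rng.normal(size=n)
scales["harmful"] = -0.5 * performance + rng.normal(size=n)
print(audit_summary(scales, performance))
```

A negative count above zero is the warning sign described in the bullets: scales that, if selected or developed for, would pull performance down rather than up.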

Conclusion

So, to go back to the original question: Do competency frameworks and psychometrics work in real-world organisations? One could hardly blame managers in the above organisation for being somewhat sceptical. This scenario is not unusual: it is typical of the results we find when auditing the effectiveness of companies’ investments in competency frameworks and other employee performance measures.

But does it have to be this way? I believe that the answer to this is no, and that frameworks can not only work, but can significantly improve performance in the ‘real world’.

All that is needed is some upfront analysis to obtain data such as that outlined above. That is what is required to determine whether performance is being properly measured, and which scales are and are not useful, so that poor scales can be removed and, if necessary, replaced with better predictors or correlates. Typically, we see employee performance improvements of 20% to 40% simply by following this process.

So the answer to the original question is ultimately a qualified “yes” – competency and employee frameworks can work provided that systematic rigour is used in selecting instruments for ‘real world’ applications.

How to get value from competency frameworks and psychometric tests

  1. Frameworks cannot predict high performance if you don’t know what good looks like. The first step in procuring frameworks is therefore to ensure that “high performance” is clearly defined and measurable in relation to your roles. And if you want to appeal to your customers – operational managers – use operational performance outcomes.
  2. Before investing in a framework that will potentially have a negative effect on your workforces’ performance, ensure you get an objective evaluation from an independent analytics firm of each scale’s concurrent and/or predictive validity in respect of employee populations similar to your own. If the vendor you’re considering purchasing from cannot provide independent evidence, request a discount or get expert advice of your own before purchasing anything.
  3. If your workforces are larger than around 250 employees, you should certainly get expert advice to ensure that a proposed framework does indeed correlate with or predict performance in your own organisation before making a purchase.
  4. If you need professional help interpreting quantitative evidence provided by vendors, get it from an independent analyst – never from a statistician working for the vendor trying to sell you the framework.
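The “around 250 employees” figure in point 3 is consistent with standard power arithmetic. A commonly used Fisher-z approximation (1.96 and 0.84 are the usual normal quantiles for a two-sided 5% test at 80% power) puts the sample needed to reliably detect a correlation of 0.20, the small effect sizes discussed in the case study, at roughly 194:

```python
import math

def n_for_correlation(r, z_alpha=1.96, z_power=0.84):
    """Approximate sample size needed to detect a true correlation r
    (two-sided 5% test, 80% power) via Fisher's z transformation."""
    fisher_z = 0.5 * math.log((1 + r) / (1 - r))
    return math.ceil(((z_alpha + z_power) / fisher_z) ** 2 + 3)

print(n_for_correlation(0.20))   # about 194 employees
```

Smaller workforces can still be audited, but the wide confidence intervals mean the results should be treated as indicative rather than decisive, which is exactly why expert advice matters at that scale.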