The future of HR is ‘No-HR’, yet more ‘Human’. Can analytics and machine learning support and improve the HR function? And if we want HR to be more ‘Human’, why are we talking about machine learning at all?
The role of HR in many organisations is often seen as that of a paper pusher, managing the rituals of hiring, onboarding, appraisals and standardised training. Quite often these tasks adhere to a set template, which makes them merely compliance-oriented rather than focused on improving an employee’s effectiveness. Compliance and effectiveness are at best weakly correlated, yet they are frequently assumed to be strongly causal. Consequently, these tasks have limited impact on an individual’s performance and growth in an organisation.
This nature of the HR function is predominantly due to two factors. The first is organisational or departmental leadership, which in many cases sees the function as one that must exist because ‘it has to be like that’, a view rooted in the tight coupling of legacy systems and traditional operational methods. The second is the lack of objective decision making in the organisational culture. However, both factors can be addressed by incorporating HR analytics and machine learning to enhance objective reasoning and decision making around an employee and their job functions.
One of the positive offshoots of the long existence of the HR function is the creation of systems for managing employee data. While the maturity and complexity of these systems vary, a basic system of data collection is often available. This system and its corresponding database provide a huge opportunity to build data analytics solutions that support the key functions of hiring, engaging and developing the workforce.
Here, we share our experience of developing a proprietary data analytics model for a leading real estate Group. This Group is acknowledged for creating state-of-the-art buildings in premium locations and is in the business of developing residential and commercial projects spread across approximately 1.5 million square meters. It was saddled with the problem of high attrition, and basic analysis indicated that outflow was high within 6 months of hiring, and more pronounced in the 2–6 years of experience band across all functions. It seemed obvious that the hiring process had to be analysed and corrected so as to identify the right set of candidates.

We first had to define what a “right” candidate is. To answer this, we delved into existing organisational processes and their mapping to individual tasks and job functions. There was a standardised process for managing appraisals; hence we had an objective metric for the “rightness” of an employee. Thus, we conceptualised a metric of Employee Success Propensity (ESP). ESP can be defined as a metric that indicates the probability of a candidate’s success based on details in the resume and certain organisational characteristics. The success probability is based on established measures of employee performance at the workplace. The ESP score is designed to provide an objective reference for screening and hiring candidates better.
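Conceptually, an ESP score of this kind can be expressed as a logistic probability over resume and organisational features. The sketch below is purely illustrative: the feature names, weights and intercept are hypothetical stand-ins, not the Group’s actual fitted coefficients.

```python
import math

def esp_score(features, weights, intercept):
    """Employee Success Propensity as a logistic probability in [0, 1].

    In practice the weights come from a fitted binomial logistic
    regression; here they are illustrative placeholders.
    """
    z = intercept + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical candidate: [years_of_experience, education_level, tenure_fit]
candidate = [4.0, 1.0, 0.7]
weights = [0.3, 0.5, 1.2]   # illustrative values, not real model output
score = esp_score(candidate, weights, intercept=-2.0)
```

A recruiter-facing tool would rank or threshold such scores; the screening decision remains with the human, with the score acting as an objective reference point.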
Once the metric and outcome were defined, we adopted a five-step process for model development.
- Sample Data and Test Data Creation: The Group’s 50 years of legacy record keeping helped us identify consistent sets of data fields across the years. Within this large volume of data we found outliers that had to be eliminated, and with most data sitting in scattered, unstructured systems, a substantial clean-up and consolidation effort was required.
- Individual & Organisational Variables: We identified sets of variables that might impact the success of an employee: at the individual level, parameters like Age, Cumulative Experience, Time in Organisation and Educational Background; at the organisational level, parameters like Job Function, Compensation, Team Size, Manager Behavioural Competencies, Designation and Promotion History. This is not an exhaustive list but an indication of the variables used for model creation and later for model training.
- Creation of Variables for Model Development: Each variable was then evaluated, tested for correlation and checked for distribution, and the high-impact variables were shortlisted. There were multiple iterations at this step to make the model robust, and back-testing was performed for validation.
- Build Univariates: The model type used was binomial logistic regression, and the outcome we were looking for was the prediction of our event of interest, i.e. the probability of high performance.
- Checks & Balances: The model was back-tested against the different sets of test data generated earlier. After multiple iterations we arrived at a model with a success rate of 60%, i.e. the model correctly predicted a candidate’s success 60% of the time.
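The variable-shortlisting step above can be sketched as a simple correlation screen against the performance outcome. The variable names, toy values and the 0.5 cutoff below are all hypothetical, chosen only to show the mechanics; a real screen would also look at distributions and multicollinearity.

```python
import math

def pearson(xs, ys):
    """Pearson correlation between a candidate variable and the outcome."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy data: two candidate variables and a high-performer flag per employee
variables = {
    "tenure_months": [6, 30, 48, 12, 60, 24],
    "team_size":     [6, 6, 6, 5, 6, 7],
}
outcome = [0, 1, 1, 0, 1, 0]   # 1 = high performer (illustrative labels)

# Keep variables whose absolute correlation with the outcome passes a cutoff
shortlist = [name for name, vals in variables.items()
             if abs(pearson(vals, outcome)) >= 0.5]
```

Here only `tenure_months` survives the screen; `team_size` shows no linear relationship with the outcome in this toy sample and is dropped.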
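The modelling and back-testing steps can be sketched end to end with a minimal binomial logistic regression fitted by gradient descent on synthetic data. This is a toy stand-in for the statistical tooling actually used: the single feature, the labels and the train/test split are all fabricated for illustration.

```python
import math
import random

def train_logistic(X, y, lr=0.1, epochs=500):
    """Fit a binomial logistic regression by stochastic gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = 1.0 / (1.0 + math.exp(-(b + sum(wj * xj for wj, xj in zip(w, xi)))))
            err = p - yi                       # gradient of the log-loss
            b -= lr * err
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
    return w, b

def back_test(w, b, X, y, threshold=0.5):
    """Success rate: share of held-out cases predicted correctly."""
    hits = 0
    for xi, yi in zip(X, y):
        p = 1.0 / (1.0 + math.exp(-(b + sum(wj * xj for wj, xj in zip(w, xi)))))
        hits += int((p >= threshold) == bool(yi))
    return hits / len(y)

# Synthetic sample: one feature that separates high performers reasonably well
random.seed(0)
y = [i % 2 for i in range(200)]
X = [[random.gauss(1.0 if yi else -1.0, 0.8)] for yi in y]

w, b = train_logistic(X[:150], y[:150])       # fit on the sample data
rate = back_test(w, b, X[150:], y[150:])      # validate on held-out test data
```

The held-out success rate computed here plays the same role as the 60% figure in the case study: a model is only accepted once its predictions hold up on test data it was not fitted on.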
The organisation has been using this model for the last two years, and it has been fine-tuned to increase the success rate to 71%.
With cases like the one above in mind, it would be great to see HR analytics playing an important role in supporting HR functions and creating efficient, agile organisations.
About the authors: Srikant Rajan is a Strategy & Technology Consultant (email@example.com); Suneel Sharma is a Faculty in Information Technology and heads Professional Technology Programs and Big Data & Visual Analytics at SP Jain School of Global Management (firstname.lastname@example.org).