HR analytics lets HR make better decisions on the basis of historical information about employee performance. For example, if data suggests that some of your best talent share a certain education background, hobbies or profile, you can screen the candidate pool for those who are most likely to be successful. This means lower recruitment cost, reduced attrition in the future and better business results. The availability of online databases, applications, social media profiles, career directories, documents and so on today makes it easier to improve the effectiveness of recruitment and learn more about applicants.
Similarly, we can use online databases and career directories to build profiles and job descriptions based on how other organizations define such roles and on the availability of talent in the market. This translates into a higher success rate not only during recruitment but also in retention.
Using historical data on employee performance and the specific conditions that led an employee to perform better, HR managers can use clustering models to put together teams of like-minded employees in which every individual performs at his or her best. Similarly, inconsistent performance, or spikes and drops in performance, can help HR analysts identify the key drivers behind such patterns.
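As an illustration, here is a minimal sketch of such a clustering approach; the file name and performance columns (quality_score, delivery_rate, collaboration_index) are hypothetical placeholders, not taken from any specific HRMS.
## A minimal sketch: grouping employees by performance attributes (file and column names are illustrative)
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler
perf = pd.read_csv("employee_performance.csv")   ## hypothetical performance extract
features = perf[["quality_score", "delivery_rate", "collaboration_index"]]
X_scaled = StandardScaler().fit_transform(features)   ## scale so no single metric dominates
perf["cluster"] = KMeans(n_clusters=4, random_state=0).fit_predict(X_scaled)
perf.groupby("cluster")[["quality_score", "delivery_rate", "collaboration_index"]].mean()   ## profile each cluster of similar performers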
This is one of the most widely cited applications of HR analytics. Using historical employee data, Machine Learning (ML) classification models can predict, with good accuracy, which employees are most likely to leave the organization. This is called a predictive model for employee attrition. The model provides the propensity, or probability, that an employee will leave in the near future. This data-based approach can replace the RAG (Red/Amber/Green) colour codes that HRBPs use to flag employees at high flight risk.
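A minimal sketch of such an attrition propensity model is shown below; the dataset and feature names (tenure_years, last_rating, salary_hike_pct, left_company) are hypothetical and would vary by organization.
## A minimal sketch of a predictive attrition model (file and column names are illustrative)
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
hr = pd.read_csv("employee_history.csv")   ## hypothetical historical HR dataset
X = hr[["tenure_years", "last_rating", "salary_hike_pct"]]
y = hr["left_company"]   ## 1 = employee left, 0 = employee stayed
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
propensity = model.predict_proba(X_test)[:, 1]   ## probability of leaving, in place of RAG codes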
Linking performance to pay is an evergreen topic in HR. With performance data that goes beyond performance ratings, C&B professionals can build statistical models to validate whether increased compensation and benefits for an individual result in a justifiable improvement in business performance. Further, data analytics can be used to profile employees based on the value they see in the various benefits provided by the organization, and to personalize the package.
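As a sketch of what such a validation could look like, the snippet below regresses a hypothetical performance-change measure on compensation variables; the dataset and column names are illustrative assumptions, not a prescribed model.
## A minimal sketch: testing whether compensation changes explain performance improvement (columns are illustrative)
import pandas as pd
import statsmodels.api as sm
cb = pd.read_csv("comp_and_performance.csv")   ## hypothetical C&B dataset
X = sm.add_constant(cb[["hike_pct", "bonus_pct"]])
y = cb["performance_delta"]   ## change in a business performance measure
print(sm.OLS(y, X).fit().summary())   ## coefficient p-values indicate whether the link is statistically justifiable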
L&D can play a pivotal role in enhancing business performance and building a future-fit workforce by using data to identify training needs, establish quantitative effectiveness measures for L&D interventions and statistically prove the effectiveness of a program. For example, using wearables, L&D professionals can capture real-time data such as employees' heart rates to ascertain the effectiveness of a learning module covered in training. That data can then be used to design more effective interventions.
And most importantly, L&D can rescue itself from the perception of being a provider of assorted career development programmes that deplete a large part of the company's budget.
#nilakantasrinivasan-j #canopus-business-management-group #B2B-client-centric-growth #HR-analytics
In recent years, most business functions have undergone a transformation because of the power of Big Data, cloud storage and analytics. The digitization wave sweeping the industry now is an outcome of the synergy of various technology developments over the past two decades. HR is no exception. HR analytics and Big Data have given HR leaders the ability to take "intuition" out of their decisions, which had long been the norm, and replace it with informed decisions based on data. The use of HR analytics has made these decisions more dependable and accurate.
For this reason, many companies today invest tremendous resources in talent management tools and skilled staff, including data scientists and analysts.
Nevertheless, there is a lot more to do in this area. According to a Deloitte survey, three out of four businesses (75%) believe the use of data analytics is "important", but only 8% think their organisation is strong in analytics (the same figure as in 2014).
HR analytics can touch every division of HR and improve its decision making, including Talent Acquisition and Management, Compensation and Benefits, Performance Management, HR Operations, Learning and Development, Leadership Development and more.
Most organizations today sit on a pile of data, thanks to HRMS and cloud storage. However, in the absence of a proper HR analytics tool or the necessary capability among HR professionals, this useful data remains scattered and unused. Organizations are now coming to accept that analytics is more about capability and less about acquiring fancy technological tools.
An HR professional with the right analytics capability can interpret this valuable data and transform it into insights using HR and big data analytics. Once trends are illustrated, HR can determine what to do on the basis of the results. Analytics can also quantify the impact of HR metrics on organisational performance, which enables leaders to take proactive decisions.
HR analytics can also help address problems that organizations face. For example: do high performers exit an organisation more often than low performers, and if so, what leads to that turnover? Data-based insights empower business leaders to take the right decisions regarding talent rather than mulling over intuitions or finger-pointing between HR and the business.
Here are 6 big Benefits of HR Analytics
It is very common to see the HR function operating in isolation vis-à-vis the business. If you don't agree with me, observe what business leaders do when HR slides are put up in Management Committee presentations, and what the HR head does when business slides are put up. Most HR metrics, processes and policies are benchmarked against industry and competition, but very rarely are they aligned to the hard-hitting realities of their own business. For example, just by aligning HR metrics to business metrics, such as HR cost per revenue, HR cost per unit sold, revenue per employee or average lead time to productivity, HR professionals can take the first step towards better alignment with business strategy.
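For instance, a back-of-the-envelope calculation with made-up figures shows how simple these business-aligned metrics are to compute:
## Illustrative calculation of business-aligned HR metrics (all figures are made up)
revenue = 120_000_000   ## annual revenue
hr_cost = 2_400_000     ## total HR function cost
headcount = 800         ## number of employees
print("HR cost per revenue :", round(hr_cost / revenue * 100, 2), "%")   ## 2.0 %
print("Revenue per employee:", round(revenue / headcount))               ## 150000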
Complex decisions regarding hiring, employee performance, career progression, internal movements and the like have a direct impact on business strategy. When HR analytics can provide insights, based on data, on which employee is most likely to be productive in a new role, who is most likely to accept an internal job movement, or how long it is likely to take to close a critical position, HR seamlessly aligns with business needs and strategy.
Not long ago, HR was marred by policy paralysis. Organizations had HR policies for everything. Processes were built for those policies and not for the people who would use, manage or benefit from them. HR automation has in many ways helped organizations standardize their processes, whether for leave approval, employee escalations, reimbursements, payroll or anything else.
When we have meaningful data that provides insights about processes, we can take the decisions that matter most for our employees. For example, an organization introduced flexible working hours for its executives just because everyone in the market was doing it, and because a few employees asked for it. A few weeks into the flexi-working system, they were surprised to find that most employees would not avail of the benefit. The data suggested that 90% of employees commuted to work using the company shuttle, as the organization is located in an industrial suburb. So just by looking into data, organizations can build processes that are meaningful rather than merely following the industry norm.
Another popular example is Google reducing the number of interview rounds based on data, thereby improving the candidate experience and interviewer experience while cutting down the lead time to hire.
Insights from data across the employee lifecycle can help HR managers emotionally connect with employees, build personalization, and more. For example, if an employee struggles to comply with certain HR policies, data can provide timely insights on how the organization can support that employee and improve his or her experience during the tenure, thereby creating a win-win situation.
Data insights from HR analytics can suggest which candidates are likely to get selected and which are likely to perform well if selected, thereby enabling the business to increase its performance and success rate. Such insights can be used not only in hiring but also in career progression, retention, learning and development, and so on. For example, it would be an invaluable insight if HR could suggest which employees are likely to perform well together without conflict when the business wants to put a new team together.
HR analytics can help HR managers identify blind spots as far as leakage is concerned. For example, how much of an increment should we give a candidate, and what increment slabs should the organization have so as to keep employee attrition within a certain level, and so on.
Ultimately, it is every HR head's dream to build an organization that employees love to work for - one where employees wake up every morning and say, 'here's another great day'. Instead of being a copycat and experimenting with what works for other best employers in your industry or country, delving into data can cull out insights on what your employees love, relish and dislike.
#nilakantasrinivasan-j #canopus-business-management-group #B2B-client-centric-growth #HR-analytics #big-data #HR-metrics
Lean and Six Sigma are close cousins in the process improvement world, and they have a lot in common. Now we will talk about the difference between Six Sigma and Lean Six Sigma.
Six Sigma uses a data-centric, analytical approach to problem solving and process improvement. That means time and effort are spent on data collection and analysis. While this sounds logical for any problem-solving approach, there can be practical challenges.
For example, sometimes we may need data and analysis even to prove the obvious, which is a waste of effort.
On the other hand, Lean Six Sigma brings in some of the principles of Lean. Lean is largely a pragmatic and prescriptive approach, which implies that we look at the data, practically validate the problem and move on to prescriptive solutions.
Thus, combining Lean with Six Sigma helps reduce the time and effort needed to analyze or improve a situation. Lean brings in a set of solutions that are tried and tested for a given situation. For example, if you have high inventory, Lean would suggest you implement Kanban.
Lean is appealing because it most often simplifies the situation, which may not always be true of Six Sigma. However, the flip side of Lean is that if the system has already been improved several times and has reached a certain level of performance and consistency, Lean may not bring out any further improvement unless we approach the problem with a Six Sigma lens, using extensive data collection and analysis.
Looking at the body of knowledge of Six Sigma and Lean Six Sigma, you will find that Lean Six Sigma courses cover the following tools:
#nilakantasrinivasan-j #canopus-business-management-group #B2B-client-centric-growth #Lean-six-sigma #six-sigma-green-belt-certification #six-sigma-black-belt-certification
Here is the set of analytics that has been run on this data set:
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
sns.set(color_codes=True)
%matplotlib inline
diab=pd.read_csv("diabetes.csv")
diab.head()
In this data set, Outcome is the dependent variable and the remaining 8 variables are independent variables.
diab.isnull().values.any()
## To check if data contains null values
diab.describe()
## To run numerical descriptive stats for the data set
(diab.Pregnancies == 0).sum(),(diab.Glucose==0).sum(),(diab.BloodPressure==0).sum(),(diab.SkinThickness==0).sum(),(diab.Insulin==0).sum(),(diab.BMI==0).sum(),(diab.DiabetesPedigreeFunction==0).sum(),(diab.Age==0).sum()
## Counting cells with 0 Values for each variable and publishing the counts below
## Creating a dataset called 'dia' from the original dataset 'diab', excluding all rows that have zeros in Glucose, BP, SkinThickness, Insulin or BMI, as the other columns can legitimately contain zero values
drop_Glu=diab.index[diab.Glucose == 0].tolist()
drop_BP=diab.index[diab.BloodPressure == 0].tolist()
drop_Skin = diab.index[diab.SkinThickness==0].tolist()
drop_Ins = diab.index[diab.Insulin==0].tolist()
drop_BMI = diab.index[diab.BMI==0].tolist()
c=drop_Glu+drop_BP+drop_Skin+drop_Ins+drop_BMI
dia=diab.drop(diab.index[c])
dia.info()
dia.describe()
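## Splitting the cleaned dataset into Outcome=1 (diabetic) and Outcome=0 (non-diabetic) subsets for the segmented plots below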
dia1 = dia[dia.Outcome==1]
dia0 = dia[dia.Outcome==0]
dia1
dia0
## creating count plot with title using seaborn
sns.countplot(x=dia.Outcome)
plt.title("Count Plot for Outcome")
## Computing the percentage of diabetic and non-diabetic cases in the sample
Out1=len(dia[dia.Outcome==1])
Out0=len(dia[dia.Outcome==0])
Total=Out1+Out0
PC_of_1 = Out1*100/Total
PC_of_0 = Out0*100/Total
PC_of_1, PC_of_0
## Creating 3 subplots - 1st for histogram, 2nd for histogram segmented by Outcome and 3rd for representing same segmentation using boxplot
plt.figure(figsize=(20, 6))
plt.subplot(1,3,1)
sns.set_style("dark")
plt.title("Histogram for Pregnancies")
sns.distplot(dia.Pregnancies,kde=False)
plt.subplot(1,3,2)
sns.distplot(dia0.Pregnancies,kde=False,color="Blue", label="Preg for Outcome=0")
sns.distplot(dia1.Pregnancies,kde=False,color = "Gold", label = "Preg for Outcome=1")
plt.title("Histograms for Preg by Outcome")
plt.legend()
plt.subplot(1,3,3)
sns.boxplot(x=dia.Outcome,y=dia.Pregnancies)
plt.title("Boxplot for Preg by Outcome")
plt.figure(figsize=(20, 6))
plt.subplot(1,3,1)
plt.title("Histogram for Glucose")
sns.distplot(dia.Glucose, kde=False)
plt.subplot(1,3,2)
sns.distplot(dia0.Glucose,kde=False,color="Gold", label="Gluc for Outcome=0")
sns.distplot(dia1.Glucose, kde=False, color="Blue", label = "Gluc for Outcome=1")
plt.title("Histograms for Glucose by Outcome")
plt.legend()
plt.subplot(1,3,3)
sns.boxplot(x=dia.Outcome,y=dia.Glucose)
plt.title("Boxplot for Glucose by Outcome")
plt.figure(figsize=(20, 6))
plt.subplot(1,3,1)
sns.distplot(dia.BloodPressure, kde=False)
plt.title("Histogram for Blood Pressure")
plt.subplot(1,3,2)
sns.distplot(dia0.BloodPressure,kde=False,color="Gold",label="BP for Outcome=0")
sns.distplot(dia1.BloodPressure,kde=False, color="Blue", label="BP for Outcome=1")
plt.legend()
plt.title("Histogram of Blood Pressure by Outcome")
plt.subplot(1,3,3)
sns.boxplot(x=dia.Outcome,y=dia.BloodPressure)
plt.title("Boxplot of BP by Outcome")
plt.figure(figsize=(20, 6))
plt.subplot(1,3,1)
sns.distplot(dia.SkinThickness, kde=False)
plt.title("Histogram for Skin Thickness")
plt.subplot(1,3,2)
sns.distplot(dia0.SkinThickness, kde=False, color="Gold", label="SkinThick for Outcome=0")
sns.distplot(dia1.SkinThickness, kde=False, color="Blue", label="SkinThick for Outcome=1")
plt.legend()
plt.title("Histogram for SkinThickness by Outcome")
plt.subplot(1,3,3)
sns.boxplot(x=dia.Outcome, y=dia.SkinThickness)
plt.title("Boxplot of SkinThickness by Outcome")
plt.figure(figsize=(20, 6))
plt.subplot(1,3,1)
sns.distplot(dia.Insulin,kde=False)
plt.title("Histogram of Insulin")
plt.subplot(1,3,2)
sns.distplot(dia0.Insulin,kde=False, color="Gold", label="Insulin for Outcome=0")
sns.distplot(dia1.Insulin,kde=False, color="Blue", label="Insulin for Outcome=1")
plt.title("Histogram for Insulin by Outcome")
plt.legend()
plt.subplot(1,3,3)
sns.boxplot(x=dia.Outcome, y=dia.Insulin)
plt.title("Boxplot for Insulin by Outcome")
plt.figure(figsize=(20, 6))
plt.subplot(1,3,1)
sns.distplot(dia.BMI, kde=False)
plt.title("Histogram for BMI")
plt.subplot(1,3,2)
sns.distplot(dia0.BMI, kde=False,color="Gold", label="BMI for Outcome=0")
sns.distplot(dia1.BMI, kde=False, color="Blue", label="BMI for Outcome=1")
plt.legend()
plt.title("Histogram for BMI by Outcome")
plt.subplot(1,3,3)
sns.boxplot(x=dia.Outcome, y=dia.BMI)
plt.title("Boxplot for BMI by Outcome")
plt.figure(figsize=(20, 6))
plt.subplot(1,3,1)
sns.distplot(dia.DiabetesPedigreeFunction,kde=False)
plt.title("Histogram for Diabetes Pedigree Function")
plt.subplot(1,3,2)
sns.distplot(dia0.DiabetesPedigreeFunction, kde=False, color="Gold", label="PedFunction for Outcome=0")
sns.distplot(dia1.DiabetesPedigreeFunction, kde=False, color="Blue", label="PedFunction for Outcome=1")
plt.legend()
plt.title("Histogram for DiabetesPedigreeFunction by Outcome")
plt.subplot(1,3,3)
sns.boxplot(x=dia.Outcome, y=dia.DiabetesPedigreeFunction)
plt.title("Boxplot for DiabetesPedigreeFunction by Outcome")
plt.figure(figsize=(20, 6))
plt.subplot(1,3,1)
sns.distplot(dia.Age,kde=False)
plt.title("Histogram for Age")
plt.subplot(1,3,2)
sns.distplot(dia0.Age,kde=False,color="Gold", label="Age for Outcome=0")
sns.distplot(dia1.Age,kde=False, color="Blue", label="Age for Outcome=1")
plt.legend()
plt.title("Histogram for Age by Outcome")
plt.subplot(1,3,3)
sns.boxplot(x=dia.Outcome,y=dia.Age)
plt.title("Boxplot for Age by Outcome")
Inference: none of the variables are normally distributed (p < 0.05 in the normality tests below). Perhaps some subsets are normal.
## importing stats module from scipy
from scipy import stats
## retrieving p value from normality test function
PregnanciesPVAL=stats.normaltest(dia.Pregnancies).pvalue
GlucosePVAL=stats.normaltest(dia.Glucose).pvalue
BloodPressurePVAL=stats.normaltest(dia.BloodPressure).pvalue
SkinThicknessPVAL=stats.normaltest(dia.SkinThickness).pvalue
InsulinPVAL=stats.normaltest(dia.Insulin).pvalue
BMIPVAL=stats.normaltest(dia.BMI).pvalue
DiaPeFuPVAL=stats.normaltest(dia.DiabetesPedigreeFunction).pvalue
AgePVAL=stats.normaltest(dia.Age).pvalue
## Printing the values
print("Pregnancies P Value is " + str(PregnanciesPVAL))
print("Glucose P Value is " + str(GlucosePVAL))
print("BloodPressure P Value is " + str(BloodPressurePVAL))
print("Skin Thickness P Value is " + str(SkinThicknessPVAL))
print("Insulin P Value is " + str(InsulinPVAL))
print("BMI P Value is " + str(BMIPVAL))
print("Diabetes Pedigree Function P Value is " + str(DiaPeFuPVAL))
print("Age P Value is " + str(AgePVAL))
sns.pairplot(dia, vars=["Pregnancies", "Glucose","BloodPressure","SkinThickness","Insulin", "BMI","DiabetesPedigreeFunction", "Age"],hue="Outcome")
plt.title("Pairplot of Variables by Outcome")
cor = dia.corr(method ='pearson')
cor
sns.heatmap(cor)
cols=["Pregnancies", "Glucose","BloodPressure","SkinThickness","Insulin", "BMI","DiabetesPedigreeFunction", "Age"]
X=dia[cols]
y=dia.Outcome
## Importing stats models for running logistic regression
import statsmodels.api as sm
## Defining the model and assigning Y (Dependent) and X (Independent Variables)
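## Note: statsmodels' Logit does not add an intercept automatically; sm.add_constant(X) could be used if an intercept term is desired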
logit_model=sm.Logit(y,X)
## Fitting the model and publishing the results
result=logit_model.fit()
print(result.summary())
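## Re-fitting the model with progressively smaller sets of variables, dropping those with insignificant p-values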
cols2=["Pregnancies", "Glucose","BloodPressure","SkinThickness","BMI"]
X=dia[cols2]
logit_model=sm.Logit(y,X)
result=logit_model.fit()
print(result.summary2())
cols3=["Pregnancies", "Glucose","BloodPressure","SkinThickness"]
X=dia[cols3]
logit_model=sm.Logit(y,X)
result=logit_model.fit()
print(result.summary())
cols4=["Pregnancies", "Glucose","BloodPressure"]
X=dia[cols4]
logit_model=sm.Logit(y,X)
result=logit_model.fit()
print(result.summary())
## Importing LogisticRegression from sklearn's linear_model, as the statsmodels function cannot give us a classification report and confusion matrix
from sklearn.linear_model import LogisticRegression
logreg = LogisticRegression()
cols4=["Pregnancies", "Glucose","BloodPressure"]
X=dia[cols4]
y=dia.Outcome
logreg.fit(X,y)
## Defining the y_pred variable for the predicted values. Here the full 392-row 'dia' dataset is used for prediction; we could also use a separate test dataset
y_pred=logreg.predict(X)
## Generating the classification report (precision, recall and F1-score) for the model
from sklearn.metrics import classification_report
print(classification_report(y,y_pred))
from sklearn.metrics import confusion_matrix
## The confusion matrix gives the number of cases where the model accurately predicts the outcomes (both 1 and 0) and the number of false positives and false negatives
cm = confusion_matrix(y, y_pred)
print(cm)
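As a small extension that is not part of the original notebook, overall accuracy can be derived directly from the confusion matrix:
## Deriving overall accuracy from the confusion matrix (an added illustration)
tn, fp, fn, tp = cm.ravel()
accuracy = (tp + tn) / (tn + fp + fn + tp)
print("Accuracy:", round(accuracy, 3))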
Digital Transformation, Artificial Intelligence, Industry 4.0, IoT, RPA and the like are some of the buzzwords sending shivers down the spines of many executives. To be fair, many are excited about the future and the opportunities these tools and methods present.
One side of the coin
A few months ago, I had a conversation with the Head of Business Excellence of an MNC in the manufacturing sector where TQM and other similar practices are deeply rooted. He said that this year their focus is Industry 4.0 and there are no budgets for any other initiative. In his view, TQM, Lean Six Sigma and similar concepts are past their half-life; in this new age everything will be automated sooner or later, so no Kaizens will be needed, no Six Sigma DMAIC projects and no Value Stream Mapping. And as automated processes are highly efficient, there will be no need for improvement projects. He had a point. Instead of dismissing or accepting the idea outright, it is better to consider how to navigate these new-age developments.
Now the other side of the coin
Another friend of mine, who steers strategy and business development for a global digital transformation solutions provider across sectors, recently reached out to me. The quest was to find ways to help their clients speed up the adoption of digital technologies and reduce internal resistance. He said the problem was to do with their culture. Here is a quick summary of what transpired:
So, there is no doubt that new digital technologies will put you in a new orbit, but soon that orbit will become a slow lane. In the '90s, the ERP wave swept the industry; then came CRM, then BI, then Cloud, then Big Data, and now it is AI, Robotics and IoT.
So ultimately these technology tools enable the business, but nothing can beat an organization that has the following competencies ingrained in its culture:
Whether you call it Agile, DevOps, Six Sigma, Lean, TQM or BE, these frameworks rely on the same fundamental principles mentioned above.
So, to sum up: TQM and similar Business Excellence frameworks are enablers for Digital Transformation and cannot be replaced by AI, IoT or Industry 4.0.
When we talked again a few months later, he said they were still strategizing on Industry 4.0 and had not started any real work.
We have created an assessment to evaluate the Digital Transformation Culture of an organization. There are 3 broad areas –
The gap assessment will be carried out in the following manner: