There is no doubt that human resources information systems (HRIS) have changed the face of HR.
A good HRIS makes onboarding and compliance easier and can provide real-time data on an organisation’s workforce.
As reported this month in HRM Magazine, Futurist Chris Riddell predicts that artificial intelligence will play a key role in business leadership in the next five years. Riddell says, “Artificial Intelligence will start to make decisions and will ‘co-pilot’ the running of business.”
But, as the functionality of HRIS increases and technology generally gets smarter, will an employer’s reliance on the recommendations or decisions of an HRIS put the employer at risk? In this blog, we explore that question and also ask whether a computer program can be a true decision-maker, as required by law, when it comes to employment matters.
General protections claims
One of the most common claims brought under the Fair Work Act 2009 (Cth) (FW Act) is a general protections claim. General protections claims can be brought by a range of parties including unions, independent contractors or prospective employees. The most common general protections claims are brought by employees who believe that their employer has taken adverse action against them for a prohibited reason in contravention of the FW Act.
An adverse action is an action that an employer takes that is detrimental to an employee, such as dismissal, injury in employment (e.g. demotion), or discrimination between employees.
It is a breach of the FW Act for an employer to take adverse action against an employee for a prohibited reason. Prohibited reasons include union membership, the exercise of a workplace right or discrimination on the basis of a protected attribute such as age, race or sex.
When an employee makes a general protections claim, the court presumes that the adverse action taken by the employer was taken for the reason that the employee alleges, unless the employer proves otherwise. This presumption means that the employer bears the burden of proving that the adverse action was not taken for a prohibited reason (the reverse onus of proof).
In most cases, rebutting the presumption requires direct evidence from a decision-maker who must be able to satisfy the court that their decision to take adverse action was not made for a prohibited reason.
With the increased reliance on advanced technology in employee management and relations, the question arises – what if a decision-maker is a computer program?
What if...
For example, imagine an employer operating a warehouse that engages employees ranging in age from 18 to 60. The employer is experiencing a downturn in business and makes the decision to reduce costs by implementing redundancies.
Instead of having an HR Manager or team, the employer has recently invested in a state-of-the-art HRIS that performs a range of functions, from recruitment to productivity tracking and reporting.
The employer’s CEO decides that the selection of employees for redundancy will be based on performance. To identify the lowest performing employees, the CEO asks the HRIS to produce a report with the names of the five poorest performing employees.
The HRIS generates the report and the CEO signs the redundancy letters of the five employees named in it. All five employees named in that report, who have now been made redundant, are over 55 years old.
The redundant employees confer with each other and recognise the ‘age’ trend. They each lodge a general protections claim against their former employer on the basis that adverse action (termination of their employment) was taken against them for a prohibited reason (their age).
Who is the decision-maker?
Prior to the use of HRIS and the focus on big data, the selection of employees for redundancy was more often than not made by managers and HR professionals who met, carefully examined their options and evaluated employees from many different perspectives: not just productivity data but also loyalty, longevity and attitude.
Those managers and HR professionals would have been the decision-makers and the ones to give evidence as to their reasons in a general protections matter.
In the above hypothetical example with the CEO and the HRIS report, identifying the decision-maker is more complicated. Was the decision-maker the CEO who read the HRIS report and signed the redundancy letters? Or, was the decision-maker the HRIS that selected the employees for redundancy based on productivity data?
In deciding general protections cases, the courts have said on numerous occasions that a decision to take adverse action can have many levels and that there can be more than one decision-maker or person involved in the final decision. Where those multiple decision-makers do not give evidence, the employer may be unable to discharge the reverse onus of proof.
In National Tertiary Education Union v Royal Melbourne Institute of Technology [2013] FCA 451, a redundant professor successfully established that she was dismissed because she exercised a workplace right. In defending its decision to make the professor redundant, the employer called the person it considered to be the sole decision-maker to give evidence of her reasons. However, the Federal Court found that multiple people were involved in the final decision and had endorsed the termination of the professor’s employment. Those other people were not called to give evidence, and so the employer failed to rebut the reverse onus under the FW Act.
In Shizas v Commissioner of Police [2017] FCA 61, it was suggested that a doctor who denied a job candidate medical clearance was one of two decision-makers in a situation where the candidate was refused employment. The employer denied that either of the two proposed individuals was a decision-maker, and neither gave evidence to the Federal Court. It was held that the reverse onus on the employer could not be rebutted because evidence of a decision-maker was not provided.
In our hypothetical example, the CEO has relied upon the report of the HRIS and made a decision based on that “recommendation.” As a key player in the decision to make five employees redundant, is the HRIS a relevant decision-maker? If the HRIS is a decision-maker, how can a computer program give direct evidence of its decision-making process? Should the programmer (who is not a decision-maker) be called to give evidence of how the HRIS produced the report? Furthermore, how can the CEO be sure that the HRIS did not factor in a prohibited reason (in this case, age) when it produced its report?
If the HRIS can’t give evidence and the court considers it a relevant decision-maker, the employer will fail to discharge the reverse onus of proof.
A new approach for a brave new world
As technology continues to impact on businesses, the employment relationship and the law, the potential for a great divide is becoming more and more apparent. To bridge that gap, who needs to catch up or wake up – the law, employers and/or software programmers?
The law – Does the law need to reconsider what is relevant and credible evidence of the decision-making process? Should special considerations be made when the basis of a decision is a recommendation from an advanced computer program?
Employers – For employers, there are clear risks in relying on an HRIS or other workplace reporting system. At present, an HRIS cannot give direct evidence of its decision-making, and where a decision-maker cannot give direct evidence, an employer may be unable to defend its position because of the reverse onus of proof.
Programmers – Do programmers developing tools that manage employment issues need special training, or a legal education, to adequately understand the implications and uses of the systems they are developing? For example, in developing an algorithm for assessing performance, can factors such as sick leave, disability, age or pregnancy be excluded from the algorithm? An employee who takes less sick leave may appear more productive, but taking sick leave is the exercise of a workplace right. If an employer then relies on a measurement of performance that includes an assessment of sick leave to dismiss an employee, the employer opens itself up to a general protections claim.
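To make the point concrete, here is a minimal, purely hypothetical sketch of the kind of scoring logic a performance tool might contain. The field names, weights and figures are invented for illustration only and are not drawn from any real HRIS; the sketch simply shows how folding sick leave into a “performance” score can quietly penalise the exercise of a workplace right, and how removing that input changes who is selected.

```python
# Hypothetical illustration only - invented fields, weights and data.
from dataclasses import dataclass


@dataclass
class EmployeeRecord:
    name: str
    units_per_hour: float  # output measured by the warehouse system
    error_rate: float      # proportion of picks flagged as errors
    sick_days: int         # days of personal leave taken this year


def naive_score(e: EmployeeRecord) -> float:
    """Score that folds sick leave into 'performance'.

    Ranking employees for redundancy on this score penalises the taking
    of personal leave, which is the exercise of a workplace right.
    """
    return e.units_per_hour - 10 * e.error_rate - 0.5 * e.sick_days


def safer_score(e: EmployeeRecord) -> float:
    """Score based only on output quantity and quality.

    Excluding sick leave (and age, disability, pregnancy, etc.) from the
    inputs reduces the risk, but the remaining inputs may still act as
    proxies for a protected attribute.
    """
    return e.units_per_hour - 10 * e.error_rate


if __name__ == "__main__":
    staff = [
        EmployeeRecord("A", 42.0, 0.02, 0),
        EmployeeRecord("B", 42.5, 0.02, 12),  # slightly higher output, more sick leave
    ]
    for score in (naive_score, safer_score):
        lowest = sorted(staff, key=score)[0]
        print(f"{score.__name__}: lowest ranked employee is {lowest.name}")
```

In this invented example the two scores select different employees for redundancy: the naive score marks the employee who took sick leave as the “poorest performer”, while the narrower score does not. The broader lesson is that excluding a prohibited factor from an algorithm is necessary but not always sufficient, because other inputs can correlate with it.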
It is clear that business, developers and the judicial system all need to act thoughtfully and consider the consequences of their approaches to rapidly advancing technology. True artificial intelligence is just around the corner and we need to start thinking seriously about what it will mean for the workplace and for the law.