(TNS) – With 86% of large US companies predicting that artificial intelligence will become a “mainstream technology” in their business this year, management by algorithm is no longer science fiction.
AI has already transformed the way workers are recruited, hired, trained, assessed and even fired. A recent study found that 83% of HR managers rely on technology in some form when making employment decisions.
For example, at UPS, AI monitors and reports on driver safety and productivity, tracking drivers' every movement, from the moment they buckle their seat belts to how often they back up their trucks. At IBM, AI identifies employee trends and makes recommendations that help managers make decisions about hiring, salary increases, professional development and employee retention. Even NFL teams use AI to evaluate player skills and assess injury risk during the recruiting process.
Amazon, a pioneer in the use of AI, has gone all in, integrating the technology throughout its business, especially in human resources. Just a few months ago, Amazon contract workers claimed they were summarily fired by automated emails for failing to meet pre-programmed productivity criteria.
In fact, Amazon’s use of an electronic tracking system made headlines for how it monitored worker productivity and, allegedly, automatically laid off employees it deemed underperforming.
According to a 2018 letter written by Amazon lawyers, if an employee spent too much time off task, the system “automatically generate(d) … warnings or terminations regarding quality or productivity without the intervention of supervisors.” An Amazon spokesperson later clarified: “It is absolutely not true that employees are fired through an automatic system. We would never fire an employee without first making sure they received our most comprehensive support, including dedicated coaching to help them improve and additional training.”
These reports highlight the need for employers to find the right division of labor between artificial intelligence and human resources staff – between using AI to improve human decision-making and delegating decision-making entirely to algorithms.
Using AI to make decisions ordinarily made by HR professionals can have significant legal ramifications, so employers should exercise caution in deciding when – and if – to hand these matters over to algorithms. There may be instances where compliance with federal anti-discrimination law requires human intervention. This is frequently the case when it comes to workplace arrangements for pregnant, disabled and religious employees.
The Americans with Disabilities Act requires covered employers to provide reasonable accommodations to people with disabilities. Likewise, under Title VII of the Civil Rights Act of 1964, employers cannot discriminate on the basis of pregnancy and employees’ religious practices. Usually, these accommodations are made through an interactive process between employer and employee: two humans.
Most of the time, an employee initiates the interactive process by informing the employer of the need for reasonable accommodation, a conversation that can be sensitive, personal, and even difficult for employees. If an employee’s primary interface with their employer is an app or algorithm, initiating this process can be intimidating, and employees may not be willing to divulge some of their most personal and protected issues to a chatbot. Moreover, it may not even be clear to the employee who the appropriate point of contact is.
Perhaps surprisingly, there are cases in which an employer may be expected to initiate the interactive process unprompted, for example, when the employer knows that an employee is struggling at work due to a disability. In these circumstances, the process often begins when a supervisor perceives, through their own observation or judgment, that an employee needs intervention.
As a result, employees and civil rights advocates have raised concerns about whether the use of AI in employment decision-making can support a process so heavily reliant on personal interaction. An algorithm, no matter how sophisticated, may not be capable of the kind of sensitivity and responsiveness necessary to meet the needs of employees seeking accommodations.
Whether employers rely on algorithms, human resources professionals, or both, they need to develop and implement policies to handle a variety of nuanced employee situations. If an employer uses AI to assess performance and track productivity, the employer must ensure that its AI system allows for, and takes into account, reasonable accommodations related to disability, pregnancy and religious observance.
Importantly, employers should inform their employees that the obligation to engage in an interactive process for an accommodation under the ADA and Title VII still applies when the employer uses AI to track productivity.
As AI becomes a dominant technology in the workplace, algorithmic discrimination shouldn’t.
© 2021 Chicago Tribune. Distributed by Tribune Content Agency, LLC.