Abstract:
This paper investigates the influence of algorithmic decision-making on employees’ perceptions of fairness within the context of organizational management, particularly in light of the rapid advancements in artificial intelligence (AI). Leveraging the Stereotype Content Model (SCM), the study explores differential fairness perceptions between algorithmic and human decision-makers, with a focus on adverse outcome scenarios. Findings suggest that algorithms are generally viewed as fairer than human managers, with perceptions influenced significantly by the type of task being evaluated.
As AI technologies continue to permeate various business operations, organizations increasingly deploy algorithms for diverse managerial functions, including human resources management, task allocation, and performance evaluations. This shift raises critical questions about how employees perceive the fairness of decisions made by algorithms as opposed to human managers. Given that fairness perceptions are pivotal to employee satisfaction, organizational commitment, and performance, understanding these dynamics is essential for effective organizational management.
Existing research on algorithmic decision-making offers mixed insights into its impact on perceived fairness. Some studies argue that algorithms, by relying on data and objective models, minimize human biases and enhance fairness. Others, however, highlight potential shortcomings such as the neglect of qualitative information and a lack of transparency, which can lead to perceived unfairness. Additionally, the literature points to a gap in understanding how decision outcomes shape fairness perceptions, particularly when outcomes are unfavorable, which forms the basis of this study’s inquiry.
This study employs scenario-based experiments to examine how employees perceive fairness when confronted with unfavorable decisions made by either algorithms or human managers. These experiments are designed to cover a range of task subjectivities, from highly objective, data-driven tasks to those requiring significant human judgment and intuition.
The experimental results reveal that decision-maker type and task nature significantly affect fairness perceptions. In scenarios involving objective tasks, algorithms are perceived as fairer due to their presumed impartiality and lack of human error. For subjective tasks, algorithms are still viewed more favorably, but this is attributed to human managers being perceived as potentially indifferent or lacking empathy. This dichotomy underscores the complexity of fairness perceptions and suggests that while algorithms may excel in objectivity, they may fall short in areas requiring emotional intelligence.
This research adds depth to the discussion on AI’s role in management by delineating how task type and outcome influence fairness perceptions differently under algorithmic versus human decision-making. It offers insights that could help organizations better integrate AI into their management practices, ensuring that fairness perceptions are carefully managed to maintain employee satisfaction and performance.
Future research could broaden the investigation to other organizational contexts and include longitudinal studies to assess how fairness perceptions evolve with long-term exposure to algorithmic decision-making. Moreover, further studies could explore how increased transparency and employee involvement in algorithm development might enhance trust and fairness perceptions, fostering a more equitable organizational environment.