AIMS Seminar Series - 20th November 2020

Human judgements in AI loops

Abstract: In response to concerns about the ethics of algorithmic or AI-based decision-making, a commonly suggested safeguard is the insertion of a 'human in the loop', who is able to review and potentially overturn particular cases on their merits. The implied model of collaboration is one in which human reviewers attend to the individual circumstances of problematic cases; meanwhile, algorithms take care of inducing patterns across multiple cases to predict outputs. While seemingly reassuring, this approach may raise even more ethical questions than it resolves. Can it survive contact with the messy realities of AI deployment on the ground? Can we neatly apportion the 'individual justice' element of a decision-making process to a human, while preserving any benefits that algorithmic systems might bring? What complications might arise when these abstract models of collaboration are put into particular socio-technical contexts, featuring actors with different agendas, incentives and power dynamics? How can we ensure the equitable distribution of human judgement, rather than granting human discretion to the privileged and condemning the powerless to automated decisions? This talk will discuss these and other questions raised by the 'human-in-the-loop' safeguard, and point towards a research agenda for meaningful human accountability in the context of AI.